CHAPTER 1
The Sine Qua Non of Success

What makes for success in today’s world? One thing is sure: It’s not what it used to be. That’s because the nature of human enterprise has shifted radically from prior decades. During the 20th century, three broad segments could be identified—white-collar professionals, blue-collar laborers, and clerical workers. This segmentation of the labor force has long been considered outmoded.1 The key emergent sector of labor is composed of workers whose primary activity is to work not with their hands, but with their minds. Increasingly, the workplace demands people who know how to gather, interpret, and act on information, not assembly line parts or raw materials. Former secretary of labor Robert Reich called these workers symbolic analysts.2 They constitute the most highly sought-after workers in every field, and they are the best rewarded monetarily. Human intelligence is now the unrivaled capital resource of the 21st century workforce.

The economic value of intelligent minds is supported by data showing that more education usually translates to better pay. This is not too surprising—we expect that, on average, a college graduate will be paid more than a worker with a high school education. This pay gap, commonly known as the wage premium or earnings differential, is well established. More interesting is that wage premiums have been growing over time. Gaps in wages have increased among workers with varying levels of education—those holding advanced degrees, college graduates, high school graduates, and workers with less than a high school education. Over the past three decades, pay rates have been drifting upward for workers with bachelor’s or advanced degrees. For high school graduates, hourly wages have remained flat or trended slightly down, and workers with less than a high school diploma earned significantly less each year.3

What is responsible for the growing wage premium? It seems as though supply and demand ought to figure into the rising economic value of a highly educated workforce. If so, this presents a puzzle because a growing proportion of the labor force is highly educated. By itself, a greater supply of highly educated workers ought to drive down wages. The obvious conclusion is that the rising wage premium is associated with increasing demand. As never before, the labor market needs highly educated workers and rewards them with wages that reflect their economic value. That value reflects not only a more knowledgeable worker, but also a more intelligent one—a college degree translates into a 10- to 15-point IQ advantage.4 A university degree is therefore a proxy for a more intelligent mind.

Mental proficiency is demanded by the world economy partly because of the massive infusion of computers into the workplace and every corner of human endeavor. Until the invention of the computer, mental work was the province of the human mind. The succession of technologies through the ages, accelerating in the Industrial Revolution of the 1800s, greatly leveraged muscle power. Labor-saving inventions resulted in economic efficiencies that led to greater democratization of wealth and a growing middle class. There were downsides as well, including risks of a ravaged environment and a dehumanized worker.

As late as 1970, blue-collar and clerical workers could realistically pursue the American Dream. This is no longer the case. Skilled labor, though not completely outmoded, has for years been at risk for replacement by robotic technologies or outsourcing to whatever nation can supply the cheapest labor. Long gone are the days when skilled labor promised a pathway to the middle class. Physical labor and routine clerical work have been downgraded in their value to the economy as evidenced by declining pay rates and by the disturbing term “the working poor.” The rise of symbolic analysts signals an irrevocable shift in the nature of work. Over the past few decades, intellectual ability has moved to center stage. Mental work, not physical labor, is the engine of economic growth and prosperity.

For all the profound effects of technology on life and work, what was protected until recently was the human mind’s monopoly over information processing. With the advent of digital computing, machines began to encroach on this domain with rising insistence. Never before had civilizations faced this most remarkable of human inventions—machines that could think. Labor adapted by trying to understand what indispensable roles the human mind could play as computers took over many functions of processing and storing information. Over time, machines became more than complements; now they were competitors. Consequently, the human mind is forced to concentrate on the things it does best. The question we must ask is: What functions of the human mind cannot be imitated by any machine, no matter how sophisticated?

Intelligence, the hub of human intellectual capability, is the most empowering technology of all. But to capitalize on this potential, we must try to understand specific expressions of intelligence on which the human mind has a monopoly—the things it can do better than any machine. We can begin by noting that computers excel at any task that can be reduced to a set of logical rules. Even this simple fact has consequences. Clerical workers once identified as secretaries no longer function simply as conveyors of information—taking notes, retrieving files, and composing memos. Administrative assistants must perform a range of tasks from managing large databases to coordinating complex schedules and timelines, as well as tracking priorities and deadlines. In this way clerical workers adapted by drawing more fully on skills best identified with the human intellect. Computers have transformed blue-collar labor as well. The spread of computer-controlled machines helps explain the decline in blue-collar employment. Routine work, whether cognitive or manual, is increasingly consigned to computers or computer-controlled machinery.

What, then, is the province of the mind? The human intellect seems to be unrivaled in at least two ways. First, it is the world’s best problem solver. Whereas computers deal nimbly with routines, the mind is fantastically good at making sense of complex information and using that information to solve problems. Through its capacity for creative insight, the human intellect can detect patterns and pathways that often remain opaque to even the most advanced computational devices. When ambiguity complicates the picture of any worthwhile endeavor or enterprise, computers operate best as helpful complements. The human mind must lead the way.

The second signature strength of the human intellect is in coordinating work that requires facile social interaction. To be productive in such situations, workers must combine fluid communication with sensitivity to the emotional tone of interactions. This means the effective worker must take into account the myriad ways in which people differ—in personality, drive, work habits, acumen, experience, and cultural norms. Sensitivity to human differences underlies the ability to achieve functional collaboration to reach important goals. Workers must increasingly work in varied teams and across international and cultural boundaries. On these two key aspects of modern work—expert problem solving and complex communication—the human mind still reigns supreme.5

As the nature of work has shifted from physical labor in factories to mental production of inventive ideas, computers have forced the question of what only human beings can do well.6 We now know the answer: The key human resource is intelligence—especially those forms of intelligent expression performed exclusively by capable minds. Intelligence in its various manifestations is the invisible dynamo of the global economy. It’s also essential to solving complex social and environmental issues. Communities and nations alike face seemingly intractable problems of environmental threat, depletion of nonrenewable energy sources, ethnic tension, and political strife. The real world presents a tangle of troubles so complex that it’s easy to despair. Indeed, there is little hope of solving our most recalcitrant problems apart from a tremendous reserve of human intelligence, allied with wisdom and goodwill.

In this book we will explore why intelligence is vital to human effectiveness in school, in the workplace, and in everyday life. We will probe the meaning of the word intelligence, how conceptions of intelligence have changed over time, and what the best current research on intelligence says about its nature. Most important, we will pose the question, “Can intelligence be increased?” We will find that the answer to that question opens wide vistas of hope and possibility for individuals and communities—and, indeed, for the entire human race.

MEASURING THE MIND

The scientific study of intelligence can be traced back to the mid-19th century. In 1869, the British scholar Francis Galton published a book entitled Hereditary Genius in which he declared that people differ greatly in their intellectual gifts, and that those differences are inborn. In his book, Galton asserted these assumptions plainly:

I have no patience with the hypothesis … that babies are born pretty much alike…. I object to pretensions of natural equality. The experiences of the nursery, the school, the University, and of professional careers, are chains of proof to the contrary.7

Yet Galton was no mere ideologue. He was prepared to put his ideas to the test using both biographical and scientific data. In his biographical analysis, Galton identified 1,000 men whom he regarded as “geniuses.” These were men whose intellectual and creative leadership left an enduring mark on society and culture. Galton assumed that eminence was rare, occurring about once in every thousand men. The question was whether this rare quality, genius, was randomly scattered or whether eminence ran in families. Galton’s assumptions seemed to be confirmed: Men who were famous for their enduring accomplishments were more likely than their more common peers to have eminent sons and grandsons. Galton had a ready explanation: The cause of familial eminence was that highly accomplished relatives shared a common pool of superior genes (thus the title of his book, Hereditary Genius). Notice that Galton’s hypothesis hints strongly at a similar concept from the theory of evolution, biological fitness. The connection is no accident: Galton’s cousin was the famous proponent of evolution by natural selection, Charles Darwin.

Galton emphasized genetics and downplayed the environment as the basis for eminence.8 One reason for doing so was that many of the eminent men Galton studied grew up in poor families.9 Galton reasoned that if genius could arise from poverty, then its source must be genetic, not environmental. But this tendency to favor nature rather than nurture may not have been a purely scientific deduction. As a member of the British social elite, Galton may have wanted to justify the social position of the wealthy British class by proving that their (and his) high station was a natural consequence of superior biology. If Galton’s hypothesis could be proven correct, then social stratification was as expectable as any other fact of nature. Though Galton’s motives can be questioned, his zeal for measurable data to test his hypothesis was genuine. He knew that sweeping claims about human nature must be backed by evidence, and he launched an ambitious program of research to explore these claims.

By virtue of his quirky personality, Galton was perfectly suited to test his ideas. He collected mountains of data bearing on his research questions. Galton was particularly enamored with quantifying human characteristics, sometimes to the point of obsession. Galton was even reputed to rank the beauty of women he observed in various cities of Britain by surreptitiously recording his ratings of female passersby on paper tucked into his pocket. His serious work, though, concentrated not on superficial qualities, but on low-level biological capabilities, such as the ability to distinguish between fine differences in colors, tones, and weights. Here we see Galton’s assumptions at work: If people differ in their fundamental biology, then those differences will eventually give rise to variability in their achievements in the world. To understand giftedness at the level of basic biology, Galton studied the activities of wool sorters, tea tasters, and piano tuners, believing that their honed skills rested on an ability to make fine sensory distinctions.10

Galton was enterprising in his data collection. At the International Health Exhibition held in London in 1884, he charged participants a fee to take measurements of their sensory acuity, reaction time, and the like. With data in hand, Galton compared the biological characteristics of common Londoners with those of Fellows of the Royal Society.11 The result? Galton’s hypothesis was not confirmed: Wealthy British men exhibited no obvious biological superiority on Galton’s measurements.12 Francis Galton’s grand hypothesis—that higher social echelons were distinguished by measurable biological superiority—was not backed by evidence. Yet Galton’s place in history had already been established: His obsession, the quantification of human characteristics, became foundational to the branch of psychology known as psychometrics. Anyone who has taken a standardized test has experienced the practical consequences of Galton’s legacy, because the science of psychometrics underlies the construction, analysis, and interpretation of test results. For this profound contribution to science, Galton is recognized as the father of mental measurement.

Galton made additional contributions to the science of human traits. For example, he identified connections between psychological characteristics and genetic heritability, echoing the theory of his cousin Charles Darwin in The Origin of Species. Galton also distinguished between identical and fraternal twins, noting that identical twins were more likely to share psychological traits. The identical/fraternal distinction later proved to be profoundly important to modern studies on the heritability of traits. Collectively, Galton’s insights became foundational to the modern field of behavioral genetics, which quantifies the contribution of genes to human variation in disease, mental illness, personality, intelligence, and other traits. Although Francis Galton was mistaken in many of his beliefs and assumptions, equally often he was tremendously insightful. His legacy of ideas, insights, and methods extends to this very day, illuminating the biological basis of human characteristics in their panoramic diversity.

THE FIRST INTELLIGENCE TESTS

In Victorian England, Francis Galton launched a new paradigm of scientific inquiry by quantifying human traits, and by using precise measurements to test his theories with a degree of objectivity. Galton focused on such low-level biological traits as reaction time, by implication neglecting higher-level thinking. But if low-level traits could be quantified, why not higher-level expressions of memory, comprehension, and problem solving—the kinds of mental activity that we readily associate with the word intelligence? Although Galton himself did not pursue this possibility, it wasn’t long before other scholars extended the new field of mental measurement to include high-order cognition. In the late 1890s, the French psychologist Alfred Binet took measurement exactly in this crucial new direction, with profound results.

Shortly after the turn of the 20th century, administrators of the Paris school system presented Alfred Binet, already a famous psychologist, with a challenging problem. The administrators wanted to understand why some children were experiencing difficulty learning in school. Their concerns were practical. They wanted to separate students into two categories: children whose learning problems were caused by low intellectual ability and those who had the mental capability to succeed but failed for other reasons, such as poor motivation.13 The school system commissioned Binet to solve this puzzle, namely, to find a way to distinguish between the two categories of students. In response, Binet constructed what we now recognize as the world’s first true test of intelligence. It included puzzles, memory games, and questions about general knowledge. It was a grab bag of intellectual probes, not a coherent set of questions, yet it got the job done. Binet’s test was surprisingly accurate at predicting children’s school success. The test identified children unlikely to thrive in normal school settings—those who required adaptations if they were to succeed.

In devising his test, Alfred Binet shifted the focus of attention to higher-level mental activity—a huge change from Galton’s method. Importantly, Binet also found a way to place these measurements on a common scale. Binet’s innovation was “mental age.” The concept is straightforward. If a child’s mental ability matches the average of peers of the same chronological age, then his mental age and chronological age are identical. More typically, though, his mental age will be somewhat lower or higher than the average of his peers. For this reason, it’s not only possible but important to separate chronological age from mental age. For example, a precocious six-year-old child might exhibit mental qualities more typical of a seven-year-old. In this case, the child’s chronological age is six, but his mental age is seven. To make this distinction, Binet established baselines of performance for children of different ages; he had to determine in fairly exact terms what level of performance was average for children at each age. In the language of testing, Binet needed to establish norms for each age group. Once those norms were established, Binet could assign a mental age to each child based on that child’s test performance.

The separation of mental age from chronological age was critical in the budding theory of human intelligence, but one additional innovation was needed. The trick was to compute a ratio between the two. This was not Binet’s invention, but rather the contribution of the German psychologist Wilhelm Stern.14 Stern’s formula was simple:

Intelligence quotient (IQ) = Mental age/Chronological age × 100

The formula shows why IQ is a quotient—it’s the answer to a division problem. When mental age is divided by chronological age, it yields a fraction. In the simplest case, a child’s performance is exactly average for his peer group. If so, then mental age and chronological age are identical, and their ratio is 1. Multiplied by 100, this gives an IQ of 100, which is the critical number on the IQ scale. Using Stern’s formula, IQs of less than 100 are below average because they indicate that a child’s mental age falls below his chronological age. When IQ is greater than 100, then the child is above average: His mental age is higher than his chronological age.
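To make the arithmetic concrete, here is a minimal sketch in Python that simply applies Stern's ratio formula to the precocious six-year-old described above; the function name is ours, and the ages are taken from that example.

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's ratio IQ: mental age divided by chronological age, times 100."""
    return mental_age / chronological_age * 100

# The precocious child described earlier: mental age 7, chronological age 6.
print(round(ratio_iq(7, 6)))    # 117, above average
# A child performing exactly at the average for his age group:
print(round(ratio_iq(10, 10)))  # 100, the anchor point of the scale
```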

Although a remarkable advance, the IQ formula has a few serious limitations. By itself, it says nothing about what a child can actually do. It quantifies intelligence without “qualifying” it. The IQ scale is completely relative because it is computed only by comparisons with other people. This relativistic orientation—the anchoring of IQ to comparisons with others—has continued to the present, but with one important change. The original method of dividing mental age by chronological age is now considered obsolete. To understand why, consider how the original IQ formula would apply to adults. The concept of mental age works fairly well for children, but is quite meaningless for adults. An eight-year-old girl might be pleased to know that her mental age is 16, which is double her chronological age. Would she be pleased if, at age 20, she were told that she had the mental age of a 40-year-old woman? Clearly, the meaning of mental age breaks down in adult populations, and so the calculation of an intelligence quotient—mental age divided by chronological age—is not serviceable. Another method for the quantification of intelligence was needed.

The now common method for computing IQ is based on the famous, or perhaps infamous, bell curve. The bell curve, also called the normal distribution, is used not only to measure intelligence. It also depicts the typical variation in many other human qualities. Height, for example, varies greatly but not evenly. Very tall and very short people are rare; people of medium height are commonplace. When population height is depicted on a graph, the expected pattern is a bulge in the middle with symmetrical tapering on both ends. The normal distribution was not derived from the study of height or intelligence. Rather, it is a mathematical pattern that approximates many qualities found in nature. Human intelligence, it turns out, has qualities that make the normal distribution a reasonable approximation: Like height, measured intelligence clusters around medium values, and extreme values are somewhat rare. But the normal distribution is an approximation rather than a conclusion about intelligence. In reality, the human population has too many extreme scores—both high and low IQ—to make the normal distribution 100% accurate.15

One handy feature of the bell curve is that any measurement, say your own height, does not need to be expressed in particular units, such as feet or meters. Instead, all measurements can be converted to a common scale. This little feat of magic is accomplished by expressing your height only in terms of how it compares to the entire population. Is your height exactly average? Above? Below? The normal distribution allows you to say exactly how much any measurement—height, IQ, or whatever—differs from the population average. The departure from average is measured in units called standard deviations, symbolized s, which function as the common denominator. The height of a six-foot man, for example, might translate to half a standard deviation above average, or above the mean, for men. A woman whose stature is six feet would be far above average. To present this fact fairly, the proper comparison group would be other women. Such a comparison might show that her height translates to two standard deviations above the mean for women. The convenience of translating heights to means and standard deviations is that it places the heights of men and women on a common scale. It shows how much a six-foot-tall man exceeds the average for men, and how much a six-foot-tall woman exceeds her comparison group, other women.

Other traits, such as weight, work in just the same way. Although height and weight are different measurements, both can be expressed in standard deviation units. Quantities that are even more unlike can be placed on a common metric. This can lead to informative comparisons. Knowing that a man’s height is half a standard deviation above the mean and his weight is half a standard deviation below tells you something about the man’s body type—he’s relatively lean. And here’s the amazing part: Vastly different qualities of the same man—blood pressure, shoe size, visual acuity, grip strength, reaction time, and IQ—can also be placed on the bell curve and expressed in standard deviation units.
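As a brief illustration of this common scale, the Python sketch below converts raw measurements to standard deviation units (z-scores). The population means and standard deviations are illustrative assumptions chosen to echo the examples in the text; they are not survey data.

```python
def z_score(value: float, mean: float, sd: float) -> float:
    """Express a measurement as standard deviations above or below the mean."""
    return (value - mean) / sd

# Assumed, illustrative population figures (inches and pounds).
MEN_HEIGHT = (70.5, 3.0)     # mean, standard deviation
WOMEN_HEIGHT = (66.0, 3.0)
MEN_WEIGHT = (190.0, 30.0)

print(z_score(72.0, *MEN_HEIGHT))     # +0.5: a six-foot man, relative to men
print(z_score(72.0, *WOMEN_HEIGHT))   # +2.0: a six-foot woman, relative to women
print(z_score(175.0, *MEN_WEIGHT))    # -0.5: a 175-pound man, relative to men
```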

When IQ is measured in standard deviation units, different IQ tests can be translated to a common scale. This means that IQ measured on a Stanford-Binet scale can be compared to the Wechsler Intelligence Scale for Children (WISC), or to any other intelligence test. The convenience of a common scale opens possibilities for developing innovative tests of intelligence without losing the link to the familiar IQ score. By emphasizing comparison, the bell curve represents a departure from early methods of quantifying intelligence based on absolute units, such as reaction time. Scores based on the normal curve are strictly comparative: They always show how one person performs relative to other people. This method has advantages, as we have seen, yet it also has one noteworthy disadvantage: It separates the measurement of intelligence from how people think and behave—the specific performances that form the substance of intelligence. We are left only with a number—an important number, granted, but one that is removed from the specific mental activities that went into it. The number tells us “how much,” but says little about the all-important qualities that are tucked away inside the convenient quantification of intelligence as IQ.
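A short sketch shows how such a translation works in practice. It assumes the deviation-IQ conventions commonly reported for these instruments (a mean of 100, with a standard deviation of 16 on the older Stanford-Binet and 15 on the Wechsler scales); the actual values for any given edition should be taken from its manual.

```python
def convert_iq(score: float, from_mean: float, from_sd: float,
               to_mean: float, to_sd: float) -> float:
    """Re-express a score on another test's scale by preserving its z-score."""
    z = (score - from_mean) / from_sd
    return to_mean + z * to_sd

# A score two standard deviations above the mean on a scale with SD 16...
sb_score = 132
# ...corresponds to 130 on a scale with SD 15: the same relative standing.
print(convert_iq(sb_score, from_mean=100, from_sd=16, to_mean=100, to_sd=15))
```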

Can we gain any insight into the kinds of thinking that constitute intelligence? Alfred Binet was largely agnostic on the matter. He focused almost entirely on solving a practical problem: predicting children’s ability to succeed in Parisian schools. Theoretical questions about whether intelligence could be dissected and possibly engineered were largely deferred. Yet Binet’s practical contributions to the nascent science of intelligence were significant enough. He developed a functional technology, a test of mental capability that could predict school success. That capability was distinct from school learning and somehow more elemental. Also, and in contrast to Francis Galton, Binet shifted attention from the biological realm to the cognitive, and especially to higher-order thinking. This shift had immediate practical value to education in Paris and, later, around the world. For the emerging field of human intelligence, the consequences were even more significant. Through Alfred Binet, the groundwork was laid for an emerging new science whose focus was one of the most engaging enigmas imaginable—the nature of human intelligence.

INTELLIGENCE AND SCHOOL SUCCESS

Neither Francis Galton nor Alfred Binet was much interested in measuring intelligence for its own sake. Their measurements were in the service of predicting something else they deemed important. Galton wanted to predict social standing, and assumed that “eminence” in science, art, or leadership was the pinnacle of societal status. Binet’s efforts were directed more narrowly toward predicting academic success in school settings. Besides measuring human characteristics in their own distinct ways, both men were interested in predicting variables that were proxies for success in some form. From that time to the present, measured human characteristics have shown a surprising ability to predict socially valued outcomes. IQ tests, in particular, have proven to be good predictors of success in schools and other settings. Over time, that predictive power has improved as test designs have become more sophisticated.

A key figure in this progression toward greater predictive power was the Stanford psychologist Lewis Terman. In the early 1900s, Terman published an English-language translation of Binet’s test, added new test items, and established accurate age norms through extensive field testing. Terman also adopted Wilhelm Stern’s handy formula for IQ as a ratio of mental age to chronological age. In practical terms, Lewis Terman was responsible for the rising popularity of intelligence testing in the United States. Terman’s refinement of Binet’s original intelligence test became famous, and to this day it remains one of the major commercial intelligence tests, the Stanford-Binet IQ scale.

Close cousins of intelligence tests have been applied to the prediction of academic success in the university. Known first as the “College Boards,” and later as the Scholastic Aptitude Test (SAT), these instruments are still used widely in selective admissions to colleges. They tended to be modeled after IQ scales both in form and concept. In form, the tests employed easily scored test items, such as math problems or verbal analogies that could be answered in the multiple-choice format (although in recent years an essay writing component has been added to the SAT). In concept, the SAT and its precursors assumed that examinees had a latent potential—an aptitude—that is distinct from the student’s actual performance on future university coursework. It was further assumed that this latent potential could be measured, and the resulting score would predict college course grades. This would justify the use of SAT scores as one factor to consider when filtering applicants during the admissions decision process.

Like IQ, scores on the SAT and its equivalent for graduate education, the Graduate Record Examination (GRE), are expressed in standard deviation units. On these tests, the mean score was initially set at 500 and the standard deviation at 100. This standardization presents a distinct advantage of comparability from year to year. Because performance is reported in terms of means and standard deviations, every new version of the SAT and GRE can be placed on a common scale, and examinees from this year can be compared with last year’s and next year’s examinees. A second advantage is that every point along the normal curve is translatable to percentiles. The mean translates exactly to the 50th percentile. One standard deviation above the mean translates to the 84th percentile; a score two standard deviations above average translates to the 98th percentile. The same conversion to percentiles can be carried out with precision all along the bell curve.
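The percentile conversions above follow directly from the normal curve. The short Python sketch below reproduces them for a test scaled, as described, to a mean of 500 and a standard deviation of 100, using the standard normal cumulative distribution.

```python
from math import erf, sqrt

def percentile(score: float, mean: float = 500.0, sd: float = 100.0) -> float:
    """Percent of a normal distribution falling below the given score."""
    z = (score - mean) / sd
    return 100 * 0.5 * (1 + erf(z / sqrt(2)))

for score in (500, 600, 700):   # the mean, one SD above, and two SDs above
    print(f"score {score}: {percentile(score):.0f}th percentile")
# score 500: 50th percentile
# score 600: 84th percentile
# score 700: 98th percentile
```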

Tests that predict academic potential have not escaped criticism. IQ tests are easily faulted as artificial, for example. They are administered in carefully controlled settings using established procedures, and because of time constraints, IQ scores are based on a very limited sample of behavior.16 IQ tests have also been criticized for having unfortunate effects. For example, they are sometimes used as filtering mechanisms to separate students into categories with labels that can be pejorative, limiting, and possibly self-fulfilling.17 Mental retardation is one of those labels. The once technical terms, idiot and imbecile, have long been regarded as unacceptably stigmatizing and offensive. Categorization and treatment plans that are regular features of special education for low-IQ children are rightly questioned. Does the child experience an overall benefit from assessments that recognize the child as “special” and treat him as such? The answer is not always clear. Yet some commentators have argued that the use of IQ tests in diagnosing mental retardation was an overall gain for humane treatment of people whose intellectual abilities were far below normal.18 Historically, the diagnosis of mental retardation made possible by IQ testing resulted in less frequent institutionalization. IQ tests also paved the way for programs of research showing that people with mental retardation could learn and remember information, competencies that allowed them to participate in a complex society and experience a rewarding life.

Tests of academic aptitude have other potential benefits as well, such as promoting fairness and a more equitable society. Arguably, an SAT or IQ test could provide universities with an objective means of comparing applicants. Such a test could, in theory, identify highly talented students from poor families who lacked the proper social connections or who did not enter the pipeline of prep schools that channeled its graduates to the most selective colleges. Such was the case in England in the early 20th century. Until 1939, only about 15 percent of English children attended secondary schools. At the time, some scholars argued that IQ tests could identify talented children who would ordinarily be denied access to a secondary education because of their rural locations or undistinguished social backgrounds. When such tests were later discontinued, fewer children from working class families were selected into the best secondary schools.19 Some critics have argued that such tests as the SAT are, by effect or by design, instruments that deny access to higher education to poor or ethnically underrepresented students. Such effects are ironic when considered against the tests’ original intent. As in admissions to elite secondary schools in England, entry to Harvard, Yale, and other elite colleges were once swayed by a family’s social reputation and connections to alumni. The College Boards were designed to correct this bias and facilitate poorer students’ access to higher education, thereby uncovering the presumed hidden intellectual talent that would otherwise be wasted.

Though both IQ tests and university admissions tests have been criticized, they continue to be widely used because to a measurable degree such tests work: They predict academic success. That prediction is far from perfect, of course. Some people earn high IQ or SAT scores yet are lackluster students in the classroom for any number of reasons—low motivation, inadequate instruction, poverty, or emotional trauma. Other students earn low scores but exceed their predicted performance levels if they compensate with self-discipline and an unstoppable drive to succeed. Thus, to say that the predictive power of IQ is imperfect is true, but the statement is also imprecise. For greater precision, we must quantify that predictive power. In statistics, predictive strength is most often computed through a correlation, symbolized r. Perfect prediction is indicated by a correlation value of 1, symbolized r = 1.0. If a test offers no predictive power at all, then r = 0. How good a predictor is IQ? As a rough approximation, the correlation between IQ and school success is about r = .50.20 This means that IQ offers quite good prediction of academic achievement, but—as every teacher and parent knows—other factors also help determine a student’s level of success.
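To give the figure r = .50 some texture, the sketch below simulates entirely made-up IQ and grade-point data constructed so that their true correlation is about .50, then recovers that value with a standard Pearson correlation. A correlation of .50 also means that IQ accounts for roughly 25 percent of the variation in the outcome (r squared).

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Simulated (not real) data: IQ scores with mean 100, SD 15.
iq = rng.normal(100, 15, n)
z_iq = (iq - iq.mean()) / iq.std()

# Build a GPA-like outcome whose true correlation with IQ is 0.50:
# half of its standardized signal comes from IQ, the rest is independent noise.
gpa = 3.0 + 0.4 * (0.5 * z_iq + np.sqrt(0.75) * rng.normal(0, 1, n))

r = np.corrcoef(iq, gpa)[0, 1]
print(f"Pearson r = {r:.2f}")      # approximately 0.50
print(f"r squared = {r**2:.2f}")   # approximately 0.25
```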

In isolation, statistics are neither interesting nor illuminating. They require interpretation to be relevant. In interpreting the correlation of IQ with school success, what matters is not the mathematical fact of prediction itself, but what that relationship implies.21 Whatever constitutes IQ is somehow foundational to learning in schools and universities. We must interpret intelligence not as a mere number, but as a set of enabling skills that deserves our attention. If intelligence is foundational to learning, and therefore to subsequent personal achievement, we need to understand it more fully. We must press for answers to the question: What human capacity is captured in an IQ score that prepares the mind for success in academic settings and possibly other arenas?

INTELLIGENCE AND WORKPLACE SUCCESS

Soon after the pioneering work of Alfred Binet and Lewis Terman, IQ tests were discovered to be useful in settings other than schools. Just as IQ scores predicted success in the classroom, so they could predict effectiveness in the workplace. The American military was quick to capitalize on this fact. The armed forces made wide use of IQ tests and related measures of specialized abilities, such as mechanical aptitude, to assign military recruits to jobs. The Army Alpha and Army Beta tests, in particular, played a huge role in the rapid assignment of almost two million recruits to military duties during World War I.22 Army Alpha required recruits to read the questions; the Army Beta version used pictures to minimize requirements for understanding written English.23

The lessons learned by military psychologists were soon picked up by psychologists who studied performance in civilian work settings. Industrial-organizational (I/O) psychologists confirmed that IQ tests were good predictors of workplace performance in the office and on the factory floor. As in school settings, IQ tests offered moderate, though imperfect, prediction of workplace success. The correlations between IQ scores and job success were again about r = .50.24 Interestingly, the predictive power of IQ was somewhat higher for performance in complex jobs, about r = .60.25 This heightened correlation hints at something important. Around the world the most valued forms of labor entail greater informational complexity. Intelligence is absolutely vital to the modern workplace. The amplified power of IQ to predict complex work is a strong clue that intelligence will become even more important in the future.

INTELLIGENCE AND LIFE SUCCESS

A century of research tells us that intelligence predicts success in schools and on the job. This is impressive enough, but we have evidence that intelligence applies even more broadly. Intelligence also predicts health and longevity: Higher IQ scores are associated with decreased susceptibility to disease as well as longer life span.26 Intelligence also predicts success and fulfillment in life. The earliest study to investigate the lifelong influences of high IQ was led by Stanford professor Lewis Terman, the same scholar who translated and popularized Binet’s original scale. Terman wondered if high IQ during childhood resulted in long-term, measurable effects later in life. He posed an intriguing question, “What sort of adult does the typical gifted child become?”27 Starting in 1921, Terman used the Stanford-Binet Intelligence scale to identify about 1,500 children, first graders through eighth graders, whose IQs were among the top 1%. Almost all of the children had an IQ of 140 or higher. Their family backgrounds were varied, but they tended to be above average socioeconomically. For the next 70 years, Lewis Terman, along with his collaborators and successors, tracked this large group of California schoolchildren in depth. The project became the longest running longitudinal study of how psychological characteristics shape life pathways.

The lives of high-IQ children—playfully called “Terman’s Termites”—were indeed distinguished from peers with average IQs. Some divergences were quite predictable. For example, a higher proportion of Terman’s subjects completed college and earned advanced degrees. The “Termites” were also more likely than average to write books, make important discoveries, and to be elected to prestigious professional societies. Less predictable was the finding that childhood IQ was associated with broader outcomes, including health, longevity, quality of life, and well-being.28 Although the high-IQ children were not immune from personal struggles, they were more likely than their peers to grow up to become successful, well-adjusted adults.29 Other research affirms the converse truth: Low IQ is a liability. Measured low intelligence is associated with a host of poor social outcomes, including poverty, criminality, and behaviors associated with premature mortality.30, 31

It’s not too mysterious why intelligence is a good predictor of success in the academic and job arenas, as well as in other life contexts: Intelligence is a marvelously adaptive feature of the human mind. Yet, as everyone knows, this power is not always used for good. The ability to form abstractions, manipulate symbols, and learn large bodies of knowledge can be used for evil. Indeed, throughout history, intelligence has been used to promote malevolent purposes.32 Hitler’s demagoguery is perhaps the most vivid example imaginable, yet equally disturbing is that the expansion of Nazism was not the result of one man acting alone. It took coordination among the many military and intellectual leaders who joined Hitler’s cause. The lesson never to be forgotten is that human intelligence can wreak horrific destruction and generate incalculable human suffering. Reflecting upon the lessons of history, one psychologist observed that “the brilliant mind can be the most destructive force in the world.”33

With the passage of many years, it may seem hard to believe that large numbers of people—including many whose intelligence was well above average—could be persuaded to comply with purposes so evil as those promulgated by Nazi ideology. Yet psychological experiments conducted in the United States proved that the average person is far more vulnerable to manipulation by authority figures than we might assume. In classic experiments by psychologist Stanley Milgram, everyday people displayed a startling willingness to administer electrical shocks—or what they believed to be electrical shocks—to other people when they were told they must do so.34 Compliance with the demands of the “lab assistant,” who in reality was only acting the role of an authority figure in Milgram’s experiments, was so complete that many people continued to administer shocks even though they could hear screams and pleas for mercy coming from the adjacent room.

A backward glance at history disabuses us of any illusions that intelligence is an unmitigated good. Plainly, the human psyche is capable of extraordinary cruelty and susceptible to manipulation toward evil ends. To advance the common good, intelligence must be allied with moral understanding and commitment. Moral commitments take on different forms, including a caring regard for other human beings, a healthy self-love, and an abiding insistence on fairness and justice. Even though intelligence will not always be used to advance positive moral ends, it can be argued that without intelligence the chances of solving society’s most pressing problems and challenges are slim. Because our world is complex, advancing moral good at both local and global levels requires that participants have the ability to use their minds exceptionally well. Any realistic vision of a bright future requires that the denizens of planet Earth have abundant intellectual resources.

As we redirect our focus from the larger society back to the individual, let us bear in mind what research shows so clearly: Intelligence is hugely relevant to life prospects. In the pursuit of broad life success, intelligence is an asset without peer. Intelligence is doubly important in the fast-moving, complex, and interactive cultures that define our contemporary world. In three vital contexts—school, work, and everyday life—human intelligence conveys a potent advantage.35 It predicts achievement and, more important for everyday functioning, gives its possessor the intellectual tools needed for success across a wide swath of life contexts. We can generalize this way: Intelligence is the raw material for human effectiveness. Intelligence never guarantees success, but to have at one’s disposal the resources of a sharp and nimble mind is an advantage without rival for every citizen of the 21st century.

THE THRESHOLD HYPOTHESIS

Even if you already believe that intelligence is a tremendous personal resource, it’s fair to ask: Is more intelligence always better? Some psychologists question whether it is. A moderately high IQ is empowering, but a super-high IQ of, say, 150 is perhaps unnecessary. Historically, high-status jobs such as law or medicine have been largely restricted to those with at least a moderately high IQ.36 Maybe what’s important for complex work is nothing more than a moderately high IQ of about 120. We can think of this idea as the “threshold hypothesis.” It implies that once mental acumen reaches a threshold that facilitates skill in processing abstract and complex information, unlimited possibilities for achievement open up. Above the threshold, achievement becomes a function of personal values and commitments, such as ambition, persistence, and a capacity for hard work.37 But below a threshold of 120 or so, a person will necessarily be limited in what he or she can achieve in a complex society.

Some evidence supports the threshold hypothesis. In the absence of moderately high levels of intelligence, lasting achievements in complex fields are rare.38 While some data do support the threshold hypothesis, other data show that when it comes to IQ, more is better. A super-high intelligence can convey measurable benefits over a “merely” high IQ, especially in highly technical fields.39 So, what’s the answer? Is a moderately high IQ good enough, or do very high levels of intelligence help, especially as a basis for history’s most remarkable and enduring achievements?

To look for answers to these questions, we can turn again to research from Lewis Terman’s lab at Stanford in the 1920s. When Professor Terman was conducting his famous study of high-IQ children, his collaborator, Catherine Cox, was carrying out an independent investigation of 301 eminent achievers of Western civilization. These were luminaries who made contributions to science, art, music, politics, and philosophy from the Renaissance onward. Cox wanted to know whether these eminent men and women displayed evidence of high intelligence as children and young adults. To investigate, she and two assistants used historical and biographical information to estimate each person’s IQ during childhood (from birth to age 17) and during early adulthood (from age 18 through 26). These IQ estimates were based on the ages at which the children and young adults reached developmental milestones and made significant intellectual achievements.

In 1926, Catherine Cox published a monograph of her findings, concluding that eminent men and women displayed remarkable achievements very early in life.40 She noted, for example, that: “Voltaire wrote verses from the cradle; Coleridge at 2 could read a chapter from the bible; Mozart composed a minuet at 5. Goethe, at age 8, produced a literary work of adult superiority.” Stunning precocity was in every case followed by significant and enduring achievements during adulthood. Consequently, Catherine Cox’s estimates of IQ, though variable from person to person, were all far above average. Summarizing the IQ estimates for all 301 “geniuses,” Cox found that the average was “not below 155 and probably as high as 165.” Examples include: Jean Jacques Rousseau (125), Nicolas Copernicus (130), Rembrandt Van Rijn (135), Martin Luther (145), Charles Darwin (140), Abraham Lincoln (140), Leonardo da Vinci (150), Thomas Jefferson (150), Wolfgang Amadeus Mozart (150), Charlotte Bronte (155), Michelangelo Buonarroti (160), Galileo Galilei (165), Samuel Taylor Coleridge (165), Isaac Newton (170), John Stuart Mill (170), Gottfried Wilhelm Leibnitz (190), and Wolfgang Goethe (200). These estimates, if even approximately correct, suggest that levels of intelligence in the range of 135 to 180 are characteristic of men and women whose accomplishments leave a lasting trace on society and culture. Even in those rarefied strata, however, something like a secondary threshold can be discerned: The IQs of eminent mathematicians do not differ from the average IQ of PhDs in mathematics.41 Apparently, intelligence functions as a necessary but not sufficient condition for achievement at high and very high IQ bands. Even in the IQ stratosphere, a capable intellect must be combined with vital personal qualities, such as vision and drive.

The cognitive resources that constitute high or super-high intelligence are desirable for anyone who wants to build academic and career success—as well as success in broader life contexts. But that leaves open the key question: Must everyone be intelligent? One could argue that every society needs workers whose daily responsibilities do not require much by way of brainpower. Yet the case for an IQ-differentiated society is weak for several reasons. First, the unskilled labor sector, which typically requires no more than low-level intellectual skills, is shrinking rapidly. Second, because of rising expectations for efficiency, cost savings, and coordination of workflow, even semiskilled employees draw upon the ability to plan and solve problems. Third, and most important, a free society insists on opportunity for all. Even if a worker chooses work that is less intellectual by nature, at least that person is given a choice in the matter.

If we agree that a highly capable intellect is desirable for everyone rather than a select few, another question looms: Won’t every population always exhibit a range of intelligence—people who are very smart and others who are less so? Stated in those terms, the answer must be yes. But that simple answer omits much. We can readily acknowledge that intelligence, like every personal trait, inevitably varies from person to person. In fact, without variation the construct of intelligence is meaningless and its measurement is pointless. But even if variability is a permanent feature in human populations, this does not mean that the average level of intelligence cannot shift over time. Drawing on a crude analogy, personal computers are manufactured with a wide range of specifications at any point in time. Different models exhibit variability. But even though variation is an ongoing fact, the average level of computational power rises year after year.

A wholesale shift in a population’s average level of intelligence, measured as IQ, would have tremendous consequences for any society. To understand why, let’s assume that the threshold hypothesis is correct. Shifting the population IQ upward by 15 points, which is equivalent to one standard deviation, would vastly increase the numbers of adults able to work in highly technical fields, or who could apply their intellectual powers to solving tough problems in less technical domains. A few of those people would eventually make monumental contributions to society in science, art, literature, engineering, and medicine. We could anticipate benefits at the low end of the IQ scale as well. An upward shift of one standard deviation would reduce the numbers of children and adults classified as developmentally delayed. These effects would be practical: Many people, otherwise heavily dependent on societal support, would have the cognitive resources to live more self-determined, productive lives and to be less at risk for poverty, criminality, and other social ills.
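Assuming, for the sake of illustration, that IQ is roughly normally distributed with a mean of 100 and a standard deviation of 15, a short calculation makes the scale of such a shift concrete. The cutoffs used here are the threshold of 120 discussed earlier and 70, the conventional boundary two standard deviations below the mean; both are illustrative choices, not claims from the text.

```python
from math import erf, sqrt

def share_above(cutoff: float, mean: float, sd: float = 15.0) -> float:
    """Fraction of a normal IQ distribution scoring above the cutoff."""
    z = (cutoff - mean) / sd
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))

for mean in (100, 115):   # before and after a one-SD upward shift
    above_120 = share_above(120, mean)
    below_70 = 1 - share_above(70, mean)
    print(f"mean {mean}: {above_120:.1%} score above 120, {below_70:.2%} below 70")
# mean 100: 9.1% score above 120, 2.28% below 70
# mean 115: 36.9% score above 120, 0.13% below 70
```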

WHY WE MUST BECOME MORE INTELLIGENT

Let’s be realistic: If intelligence is static and unmovable, then scenarios of a brighter human race are no more than utopian fantasies—amusing, perhaps, but inconsequential. But if intelligence can be altered, the implications are powerful, perhaps even revolutionary. Can intelligence be changed? The thesis of this book is predicated on a positive answer to the question. If intelligence can be enhanced, then that one proposition becomes a conceptual anchoring point from which a multiplicity of exciting prospects follow. Among the most important is greater reason to hope for progress in addressing the world’s most thorny problems—disease, terror, and the threat of ecological disaster. For any prospect of a bright and hopeful future, we need many intelligent minds to be focused and hard at work.

Any credible aspiration to raise the intelligence of the earth’s population rests on more than the possibility of doing so. It also requires that we understand what intelligence is. To this point we have done little to probe its nature. We must dig deeper. We have already entertained the question: What is intelligence good for? Let’s now ask something much more fundamental: What is intelligence?