I was bold in the pursuit of knowledge, never fearing to follow truth and reason to whatever results they led, and bearding every authority which stood in their way.
—Thomas Jefferson
Nullius in verba (translation: “Take nobody’s word for it”)
—Motto of the Royal Society of London
Bernhard Dohrmann is a businessman and entrepreneur of wide-ranging interests. Unfortunately, he has also had his share of legal problems. In 1975, he was convicted of securities fraud for selling railroad cars that did not exist. In 1982, he was charged by the Federal Trade Commission with misrepresenting the prices of investment diamonds. The case was settled out of court, with Dohrmann’s company returning $6.7 million to investors. In 1991, Dohrmann was charged by the U.S. attorney’s office with sixteen counts of criminal contempt; it seems he lied about his company’s sales figures when selling bonds to investors. He was sentenced to prison for this crime in November 1995.1
With such a history of legal problems, what’s a troubled businessman to do? Why, go into the educational software business, of course!
Dohrmann started a company called Life Success Academy that marketed (and continues to market) Super Teaching. Super Teaching consists of a system that projects images to three screens; the central screen shows whatever images a teacher typically uses in a lesson plan. The flanking screens show “seemingly random” images of nature, or real-time footage of the teacher or the audience. This practice is said to be consistent with “whole brain learning.”2 Systems initially were to sell for $160,000 per classroom;3 the current price is down to $29,500.4
Although Super Teaching had been around since at least 2002, things started to look really promising for the Life Success Academy in December 2007, when the company signed an agreement with the University of Alabama in Huntsville. The university would help test and refine the Super Teaching method and would in return share in profits from future sales. In early October 2008, the university unveiled Super Teaching with a ribbon-cutting ceremony. The president of the university attended, but the honor of cutting the ribbon went—not inappropriately—to Tony Robbins, motivational speaker and late-night infomercial pitchman.5
A year and a half later, the University of Alabama in Huntsville dissolved its relationship with Bernhard Dohrmann and the Life Success Academy.6 Things had heated up six months earlier. A blog that covers Alabama politics had posted a lengthy summary of Dohrmann’s criminal past, provocatively headlined “Why Is UAH Involved with ‘a Very Dangerous Con Man’?”7 A month later, the university’s student newspaper published an article titled “Learning at the Speed of Con.”8
It’s hardly news that an educational Change attracted serious attention despite the fact that there was no evidence supporting it. If that were uncommon, I would have had no reason to write this book. The question that ought to interest us here was nicely put in a USA Today article: “Some observers are wondering why it took the university six months to terminate the relationship after unsavory details of the entrepreneur’s past came to light—and why due diligence did not stop the university from signing the contract in the first place.”9
This point initially seems self-evident: What in the world were officials at the University of Alabama in Huntsville thinking? Dohrmann’s sketchy past would seem to raise plenty of red flags, and fancy detective work was hardly required to find out about it. Googling Dohrmann’s name produces enough relevant information. Furthermore, in 2002, the local Huntsville paper had published on its front page a highly unflattering story about Dohrmann, detailing his criminal record.10 Yet key officials at the university were unapologetic about the partnership. The chief information officer had this to say about Dohrmann’s history: “That’s totally immaterial. People are not vetted for their past. That’s not our normal process here.”11 This argument may strike you as far-fetched: A criminal past of fraud is immaterial when you’re thinking about a business relationship?
But then again, maybe there’s something to this argument. It is not unreasonable to say, “I judge each case on its merits. It’s foolish to base a decision on some guess as to someone’s character; I’m interested in evidence about the program. Super Teaching looked promising, and the university was well protected in the signed contract.”
In fact, you could argue that such an attitude would be consistent with much of what I’ve urged in this book. Let’s not forget our discussion in Chapter Two, in which I pointed out that medieval thinkers venerated authority so much that Oxford University declared that undergraduates were to read only Aristotle and those who defended him. I approvingly recounted the change in how people weighed evidence, in which they ultimately rejected belief based solely on authority and embraced the scientific method. Well? Why is it stupid to believe Aristotle solely because he’s been right in the past, but smart to disbelieve Dohrmann solely because he’s been wrong in the past? In each case, am I not simply assuming that people who have been right in the past will continue to be right, and people who are wrong will continue to be wrong? Should the experience of the Persuader—for better or worse—influence whether or not you believe that the Change will work?
Should you believe what someone says about a subject simply by virtue of his or her authority?a In this book, we’re interested in whether or not a claim is scientifically supported, so the structure of the argument based on scientific authority would look like that shown in Exhibit 6.1.
This looks a little more complicated than it is. We’re really interested in Billy here. Billy doesn’t understand the science behind X. But Billy believes that Simon does. Billy knows that Simon says, “The scientific evidence supports X.” So Billy, without understanding the science, trusts that the scientific evidence supports X. This is belief by authority. Simon is an authority on science, so when he says something about science, Billy is more likely to believe it.
Our goal is to develop some rules as to when the logic outlined in Exhibit 6.1 is sound and when it’s not. That’s why I broke up this relatively simple situation into the three statements. That way we can see cases where the logic fails in each of the statements. We’ll examine four such cases.
First, Billy’s conclusion might be incorrect if Simon actually is not a good scientist. The whole point of believing an authority is that he or she is, in fact, an authority! Small wonder, then, that in education research, Persuaders are often eager to present their credentials. Exhibit 6.2 lists some of the more common earmarks of authority among Persuaders in education.
What should you make of these sources of authority? Some are at least potentially relevant. If you have a degree in a scientific field, that increases the chances that you have some expertise in that field. The depth and breadth of your knowledge are hardly guaranteed, it’s true. Some degrees are simple shams, purchased from online diploma mills; one such “school” granted a master’s degree to a pug dog.12 And it’s no secret that even bona fide universities vary considerably in the rigor of their programs. Still, an advanced degree from a real university is a start. It means that the Persuader committed at least one year, and possibly six or more, to serious study of the topic.
I think that having the status of a full-time researcher—at a university, a think tank, or a business—is also a meaningful credential. If someone makes her living as a scientific researcher, you have reason to think that she’s knowledgeable on the subject. Now, researchers at schools of education come in for a lot of criticism. As historian Ellen Condliffe Lagemann put it, “Since the earliest days of university sponsorship, education research has been demeaned by scholars in other fields, ignored by practitioners, and alternatively spoofed and criticized by politicians, policy makers, and members of the public at large.”13 Indeed, some data indicate that even researchers at education schools are not so sure about the quality of research there.14
Two factors have, I think, contributed to the terrible reputation of researchers at education schools. First, many people think that they can judge education research, just as they think that they can judge what makes for a good teacher. Thus, when we hear conclusions drawn by an education researcher that conflict with our own impressions, it rankles in a way that findings from other scientists do not. A biologist could come up with any theory concerning reproduction in the mud dauber wasp, and it wouldn’t run counter to my intuitions. But I have intuitions aplenty when it comes to education, and as we saw in Chapter Two, I have a mental bias to dismiss theories and data that run counter to my beliefs.
A second reason that education research is belittled turns out to be useful to our purposes here. You should remember that not every professor at a school of education is a scientist. Many academic disciplines can be related to education: history, sociology, psychology, critical theory, gender studies, linguistics, economics, political science, and so on. You’ll find representatives of all these disciplines at schools of education. This situation does not produce an interdisciplinary synergy in which ideas and perspectives cross-pollinate. More often, people ignore one another because it’s hard to understand the work of someone from a different discipline: he or she makes different assumptions, uses different tools, and has different goals. Thus education research can look chaotic. Education researchers seem to agree on very little, and that doesn’t give the public the feeling that much progress is being made. For the narrow purposes of this book, you should bear in mind that someone’s status as a professor at a school of education is a reasonably reliable sign that this person’s work has scholarly integrity, but it does not necessarily signify that the person applied scientific methods in evaluating the Change. There are other scholarly methods of understanding the world, and many methods are represented at schools of education.
If the first two items on the list (Exhibit 6.2)—an advanced degree and a research job—are only marginally trustworthy, the remainder are even less so. These “credentials” amount to having been hired to give a speech, to advise people in business or the government, or to answer a reporter’s questions. Each is a vote of confidence, yes, but it’s a vote of confidence given just once, with no opportunity to retract it. For all you know, the Persuader did his consulting at the Fortune 500 company, and the officials there ended up thinking he was a nitwit. Then too, you don’t know the basis on which the Persuader was selected to be a consultant in the first place. To some extent, these earmarks of expertise are self-perpetuating. If you were responsible for selecting a speaker for your school district, wouldn’t you be reassured to see that the person you were thinking about had given many such talks before? “This guy can’t be all bad—look at all the other districts that thought he was an expert.” This is social proof, which we discussed in Chapter One, and it’s self-reinforcing. The more you’re in the public eye, the easier it is to stay in the public eye.
Occasionally you’ll see a ploy akin to reverse psychology: someone will try to gain credibility by saying that experts ridicule him! The Persuader says, “Everyone thinks my theory is wrong. They laugh at me; they don’t take me seriously. Well, they laughed at Galileo! They laughed at the Wright brothers!” This strategy has been dubbed the “Galileo Gambit,” and it’s obviously wrong. A few scientists who were ridiculed turned out to be correct, but not everyone who garners scorn will be vindicated. They laughed at Galileo, but they also laughed at the Three Stooges. I have a sneaking suspicion (but can in no way prove) that when you see the Galileo Gambit used, the Persuader probably does not expect to convince people that ridicule means he’s right; rather he hopes to convince people that even if he is scorned, at least he is not ignored. It’s a backhanded attempt at authority: important people take me seriously enough to grapple with my ideas, although they don’t fully appreciate them. When you see the Galileo Gambit, you are looking at a Persuader whom experts ignore.
Remember, we’re trying to figure out when you should trust an authority, and we’re doing that by examining when authority can go wrong. The first case we’ve seen is one in which Simon is actually not a good scientist. Billy is mistaken about Simon’s credentials. The trustworthiness of a Persuader’s credentials is pretty hard to verify without a considerable investment of your time and, in many cases, without a certain amount of expertise yourself.
There are other ways that you can go wrong when you trust an authority. Simon may be a good scientist, but Billy the Believer may misunderstand Simon’s claim about X. That may happen because Simon’s belief is misrepresented by a third party. It also may happen when someone reads Simon’s work and draws what he sees as a natural conclusion, even though Simon never made the claim at all.
One of the most common of such misunderstandings concerns Howard Gardner’s theory of multiple intelligences. Gardner’s theory claims that the human mind has eight basic intellectual capacities: verbal, spatial, bodily-kinesthetic, musical, interpersonal, intrapersonal, naturalistic, and mathematical. People often believe that strength in one intelligence can be leveraged to remedy a deficit in another intelligence; for example, the student who struggles in math but is talented in music might be helped by putting mathematical concepts to song. Gardner never said that, and, in fact, the idea runs counter to the theory. Gardner sought to prove that these intelligences really are different—for example, that interpersonal (understanding others) and intrapersonal (understanding oneself) intelligences are not different manifestations of the same ability but are fundamentally different types of mental processes. One of the ways that Gardner supported these distinctions was to suggest that the different intelligences use different “mental codes.” By analogy, Microsoft Word and Adobe Photoshop use different types of files—they are not compatible or interchangeable. In the same way, the different intelligences use different mental representations to get their work done.15
So it’s not just that the multiple intelligences theory is silent regarding the fungibility of intelligences; it actually predicts that they are not interchangeable. Why do so many people believe that the theory makes the opposite claim? I have no way of knowing, but I’ve always suspected that it’s raw hope. It would be so wonderful if it were true: the student who has long struggled with math or with reading can suddenly be helped to succeed when his other strengths are recognized. It would be like finding a forgotten key that opens the treasure chest.
Another possibility is that Simon is a good scientist with excellent credentials, but he doesn’t really know anything about X. Nevertheless, he offers an opinion on X, and Billy believes him because Billy doesn’t notice that Simon’s expertise is specific to other matters. Advertisers have long counted on our insensitivity to the specificity of expertise. For example, tennis great Roger Federer endorses products manufactured by Nike and by Wilson. That makes sense; we’d certainly expect an athlete of his caliber to be knowledgeable and discerning about athletic equipment. Federer also endorses Mercedes Benz cars and Rolex watches. Now this is a bit more of a stretch, but we could say, “Well, he’s a rich guy, so he probably knows more about the finer things in life than the rest of us. He’s not a watchmaker, sure, but maybe he understands what makes a Rolex glamorous.” But it’s really hard to make a case for Federer’s endorsement of Gillette shaving gel; in what sense is he an authority on shaving gel?
Or consider radio talk show host Laura Schlessinger, whose show is called the Dr. Laura Program. Schlessinger’s show consists mostly of people calling in and asking for advice about romantic relationships, parenting, and other interpersonal issues. One would assume, therefore, that she is a psychiatrist (that is, an MD) or that she has a PhD in clinical psychology or counseling. She doesn’t. She has a PhD from Columbia in physiology. Her dissertation concerned the effects of insulin on lab rats.16 She did pursue further training in counseling at the University of Southern California,17 but she’s “Doctor” Laura because of her training in physiology. If being called “doctor” is supposed to give her authority as a counselor, there’s a mismatch between the credential and the authority we’re according her. We are in error.
This error in judging whether someone’s background is relevant explains our feeling that something was amiss when a university official asserted that Bernhard Dohrmann’s criminal past was “totally immaterial.” Dohrmann’s past might well be considered immaterial if, for example, I were thinking about buying the house across the street from him. Does his record of shady business deals predict that he’ll be a bad neighbor? Probably not. But if his criminal record were one of harassing neighbors, obviously that would be relevant. It’s little wonder that people thought Dohrmann’s track record in business should have been considered as the university contemplated entering a business agreement with him.
What do we do when two people who seem to be equally good authorities on a subject disagree (Exhibit 6.3)?
Of the four problems I’ll describe, this is probably the most prevalent in education research, and it’s understandable why that’s the case. There will be disagreement among authorities when there is not a reasonably successful scientific model of a phenomenon. If you ask one hundred cognitive psychologists, “What makes people creative?” you’re going to get a lot of different answers. Even if the hundred psychologists you ask are really top-flight scientists, they won’t agree, because creativity is poorly understood. If, in contrast, you ask the same hundred psychologists, “When you look at an object, how do you know how far away it is?” you’ll get much higher agreement. It’s a well-studied problem, and we know a lot about how that process works. The questions that educators care about are more often similar to the creativity question than to the distance question. That’s why education researchers, even authoritative experts, often disagree.
Small wonder that many teachers are suspicious of education research, and that it seldom influences their teaching.18 Part of this suspicion comes from a sense that researchers reduce everything to things that are easy to measure and, in so doing, miss most of the rich texture of the classroom.19 My own experience in talking to teachers indicates that there is another factor: a sense that different people make different claims about what “the research shows.” Just as statistical legerdemain can make data appear to support any conclusion, so too can “the research” be shape-shifted as one sees fit. One can hardly blame teachers, who are in the classroom every day, observing firsthand what works and what doesn’t, for not changing their practice on the basis of what looks like foolery.
• • •
Our purpose thus far has been to list the ways that an argument from authority—trusting that a conclusion is scientifically supported because a knowledgeable person said that it is scientifically supported—can go wrong. We’ve examined four ways (Exhibit 6.4).
It sounds as though we’re working up a good head of steam toward the conclusion that “you can’t believe something just because an authority tells you it’s so!” The Royal Society of London—one of the world’s oldest scientific societies, dating to 1660—seems to have gotten it right in its motto, offered as an epigraph to this chapter: “Take nobody’s word for it.” That seems to be in the spirit of this book, the point of which is to allow you to judge the merits of scientific research yourself. But dismissing authority can’t be done quite so quickly.
Each of us trusts authorities all the time. On what basis do I evaluate the advice dispensed by my doctor? Or my electrician, or my accountant? I trust them exactly because they have relevant training; they are credentialed. Believing them because they are authorities is not just a matter of convenience for me. I have little choice but to trust them.20 Naturally, this trust doesn’t always benefit us. We’ve all had that uneasy feeling of wondering whether our doctor really knows what he’s doing. But for the most part, the trust seems to work out. If so, it must be that the four problems listed in Exhibit 6.4 are generally absent when we trust the authority of our doctor or electrician. Why?
There seem to be three crucial differences between my doctor and an education researcher. First, when it comes to doctors, plumbers, and the other professionals whom I trust, I don’t have to do the vetting. I believe, with some justification, that a license to practice the profession is meaningful. Hence, the first problem noted in Exhibit 6.4 is solved: someone credentialed to be knowledgeable probably is.
Further, the professionals in these fields decide whether a particular subfield requires further training or a separate license. Auto mechanics may become certified to work on particular makes of cars. Any physician is somewhat knowledgeable about heart problems, but if there is a serious issue, the patient is sent to a cardiac specialist. Hence, problem 3 (Exhibit 6.4) is largely solved: experts are reluctant to take positions beyond their expertise because there are acknowledged specialists.
Another important difference between education research and fields with trustworthy experts is that we believe there is a settled truth in these other fields. For example, last week I called an electrician to diagnose and repair a problem: the lights in my living room kept flickering. It didn’t even occur to me that there might be two or three opposing schools of thought on how to solve this problem. As I consider it now, I recognize that there might be more than one way to fix it, but I expect that different electricians would acknowledge that even though they have their own favorite method, the other methods are at least okay. Hence, the fourth problem (Exhibit 6.4) doesn’t come up. In education, experts think that other experts’ methods are terrible, are likely to damage children, and so on.
The third difference between education researchers and doctors or accountants concerns the role of the consumer in fixing the problem. In fields with acknowledged experts, practitioners try mightily to minimize our contribution to solving the problem. My auto mechanic doesn’t invite me to turn a few screws as he’s fixing my car, or to chip in my opinions about what to do next. He (accurately) figures I’ll be more of a nuisance than anything else. So too, my physician tolerantly answers my questions, but doesn’t volunteer more detail than I request. In both cases, their main message to me is, “To keep things running smoothly, you do exactly as I say.” My job is to execute their instructions, whether it’s changing the oil more frequently or getting more exercise. We have an unspoken understanding that my ability to understand why I’m to do these things is limited. Sure, I may ask questions and try to understand things better, and on occasion my questions even prompt my doctor to make some small changes. But I always yield to his authority. I would never think of using his diagnosis as a springboard for a homegrown plan of care.
This arrangement does not hold in education. Teachers and parents are not willing simply to do what education researchers tell them. Researchers don’t have that kind of credibility, and they don’t deserve it. In consequence, parents and teachers interpret what education researchers suggest, and sometimes their interpretation is not in keeping with what the researcher believes the science supports, as we described in the case of Gardner’s theory of multiple intelligences. That’s the source of the second problem we discussed (Exhibit 6.4).
In sum, we trust authorities when (1) a reliable licensing body certifies their expertise; (2) there is a settled truth in the field on which experts agree; (3) the settled truth allows experts to diagnose problems accurately and to prescribe remedies that work in most situations and that do not require creativity or skill from the nonexpert. So where do we stand in education? None of the three conditions has been met.
As noted, there is not a licensing body that testifies to the skill of education researchers. The closest you can come is to look for academic degrees or full-time employment as a researcher. I’ve argued that these credentials are not meaningless, but neither are they terribly reliable.
As for education research yielding a “settled truth,” it is fairly obvious that we’re not there; researchers don’t have a corpus of knowledge that is universally agreed on even for limited education goals, such as “How much emphasis should be devoted to phonics in the teaching of reading?”
In fact, the problem is still worse. Education researchers often don’t even agree on first principles of how research ought to be done. For example, the National Mathematics Advisory Panel was created at the request of President George W. Bush and charged with summarizing what is known about how kids learn mathematics. The panel was composed of nineteen eminent researchers, who wrote a report, published in 2008.21 Before the year was out, there was a special issue of the flagship journal of the American Educational Research Association devoted to critiques of the report.22 The thirteen articles in this issue centered on two themes: claims that the members of the panel adopted too narrow a view of what math education should contain, and claims that the panelists adopted too narrow a view of what was acceptable research. Education researchers do not agree on the fundamentals of research, so it’s difficult to find anyone who all education researchers would agree is an authority.
There is a high-profile attempt to solve the problem of authority in education. Called the What Works Clearinghouse (WWC), it was created in 2002 through the Department of Education, with the goal of sifting through the research dross and providing polished summaries of the research gems.b The WWC focuses on interventions (for example, curricula) rather than education theory. The idea is to summarize studies that have evaluated a particular reading program, dropout prevention program, and so on. Researchers employed by the WWC set high standards for what research is considered worthy of inclusion, so that consumers will know that the summaries they read are based on high-quality science.
As I write almost a decade into the project, it’s hard to find people who think it has been a smashing success. Complaints have often focused on the standards set by the WWC.23 In an effort to be rigorous, the WWC considers only certain types of experiments, which arguably limits the perspective of reviewers, and the WWC sets stringent quality criteria so that few studies end up making the grade.
• • •
So what’s the final word on authority? We began by making explicit the logic behind an argument from authority (Exhibit 6.1, repeated here as Exhibit 6.5).
In the course of this chapter, we’ve seen that there are multiple reasons to doubt proposition A and proposition B. We are thus encouraged, in Jefferson’s lively epigraph to this chapter, to beard every authority who stands in our way. How are we to be bold in our pursuit of knowledge? If we can’t take an authority’s word for it, how do we evaluate the strength of evidence ourselves? That’s the topic of Chapter Seven.
a Note that we’re using the term authority to mean “knowledge” or “expertise” (“She’s an authority on the subject”) rather than to mean “power” (“His subversive activities got him in trouble with the authorities”).
b Another, smaller-scale effort is the Best Evidence Encyclopedia (www.bestevidence.org), run by the Johns Hopkins University School of Education. Researchers there do not write research summaries themselves, but rather seek out high-quality research summaries that have been published elsewhere, and put them into more reader-friendly language.
Notes
1. Lazarus, D. (2002, March 10). If nothing else, man with past is persistent. San Francisco Chronicle. Available online at http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2002/03/10/BU139492.DTL.
2. Dohrmann, B. J. (2005). Whole brain learning. Available online at http://www.superteaching.org/STMIND.htm.
3. Hannah, G. (2002, April 28). Bernhard Dohrmann. Huntsville (AL) Times, p. A9.
4. This figure is according to the Super Teaching purchase order: http://superteaching.org/CEO_ST_purchase_order_v4.pdf.
5. McLaughlin, B. (2008, October 7). Learning at the speed of thought. Huntsville (AL) Times, p. 1A.
6. Ramhold, J. (2010, April 14). University dissolves “Super Teaching” partnership. The Exponent. http://exponent.uah.edu/?p=2538 (accessed July 17, 2011; this Web page is no longer available).
7. This blog entry is no longer available from the Flashpoint blog Web site (http://www.flashpointblog.com).
8. Shavers, A. (2009, October 21). Super Teaching: Learning at the speed of con. The Exponent. http://exponent.uah.edu/?p=1570 (accessed July 17, 2011; this Web page is no longer available).
9. Kolowich, S. (2010, May 27). University had short attention span for “Super Teaching.” USA Today. http://www.usatoday.com/news/education/2010-05-27-IHE-Super-Teaching-U-Alabama27_ST_N.htm.
10. Hannah, G., & Lewin, G. S. (2002, April 28). “Can’t fail” international success system based here has its skeptics. Huntsville (AL) Times, p. A1.
11. Kolowich, 2010.
12. Hendel, J. (2011, June 28). Can a dog still earn an MBA? Fortune. Available online at http://management.fortune.cnn.com/2011/06/28/can-a-dog-still-earn-an-mba/?section=magazines_fortune.
13. Lagemann, E. C. (2000). An elusive science: The troubling history of education research. Chicago: University of Chicago Press, p. 232.
14. Levine, A. (2007). Educating researchers. Education Schools Project. Available online at http://edschools.org/EducatingResearchers/index.htm.
15. Gardner (1999) sought to correct this mistaken application of his theory (and others) in his book Intelligence reframed: Multiple intelligences for the 21st century. New York: Basic Books.
16. Schlessinger, L. C. (1974). Effects of insulin on 3-O-methylglucose transport in isolated rat adipocytes. ProQuest Dissertations & Theses, http://proquest.umi.com/pqdweb?did=761334421&sid=1&Fmt=1&clientId=8772&RQT=309&VName=PQD.
17. Dr. Laura. (n.d.). http://www.drlaura.com/g/About-Dr.-Laura/273.html.
18. Hemsley-Brown, J., & Sharp, C. (2003). The use of research to improve professional practice: A systematic review of the literature. Oxford Review of Education, 29, 449–470.
19. Shkedi, A. (1998). Teachers’ attitudes towards research: A challenge for qualitative researchers. Qualitative Studies in Education, 11, 559–577.
20. Walton, D. (1997). Appeal to expert opinion: Arguments from authority. University Park: Pennsylvania State University Press.
21. National Mathematics Advisory Panel. (2008). Foundations for success: The final report of the National Mathematics Advisory Panel. Washington, DC: U.S. Department of Education.
22. Kelly, A. E. (Ed.). (2008). Reflections on the US National Mathematics Advisory Panel Report [Special issue]. Educational Researcher, 37(9).
23. For example, Confrey, J. (2006). Comparing and contrasting the National Research Council Report On Evaluating Curricular Effectiveness with the What Works Clearinghouse Approach. Educational Evaluation and Policy Analysis, 28, 195–213.