CHAPTER FOUR
LAW SCHOOL MISMATCH
THOUGH THE EARLY RESEARCH on mismatch described in Chapter 3 was dramatic, it focused on very discrete phenomena in which “relative disadvantage” played an obvious, almost intuitive part. Law schools provide a very different window into mismatch. Because law graduates take bar exams, it is possible to see whether mismatch directly affects learning itself. And due to the breadth of data developed on law students and lawyers over the past generation, it is possible to look at the effects of legal education more comprehensively. One of us (Rick Sander) pioneered this work, and because his experience in studying mismatch was in many ways a personal journey, this chapter and the next are told from his perspective.
The first thing that struck me when I arrived as a junior member of the law faculty at UCLA in 1989 was how different it was from Northwestern, where I had received my law degree. During my years in graduate school, Chicago and its communities had passed through some dramatic events (some of them memorably recounted in Barack Obama’s memoir, Dreams from My Father), but the university and my overwhelmingly white classmates seemed largely sealed off from the city. UCLA Law School felt very different. Nearly half of the student body along with many of the faculty were nonwhite; student organizational life was vibrant, and many students spent their precious free time engaged with pro bono organizations in Los Angeles neighborhoods. Classroom discussions reflected the diversity of the students, though not in a particularly self-conscious way. Cross-racial interaction was ubiquitous and cross-racial friendships were common. After the racial tension of Chicago and the sequestration of Northwestern, UCLA seemed too good to be true. Of course, in a sense, it was. Like a hundred fictional travelers to new worlds that seemed at first to be utopias, I was gradually to discover that the law school had some disturbing hidden secrets.
One discovery was that race was closely linked to law school performance. Almost all classes other than seminars used anonymous grading, but after grades were turned in, professors could get a “matching sheet” that linked exam numbers to names. After my very first semester I was struck that my Hispanic, black, and American Indian students were mostly getting Cs in a class in which the median grade was a B-. The pattern repeated the next semester—including even students who had impressed me in class. Puzzled, I asked a senior colleague about the pattern. Oh yes, she replied, shaking her head. The minority students come in with weaker preparation. It was a tough problem.
But this answer puzzled me more. How could it be, I asked, that 80 percent of the black students had weaker preparation than 80 percent of the white students? Weren’t they all admitted through the same process? No, the colleague replied. Admissions were done separately for each racial group. And though there were lots of blacks in the pool who would shine at UCLA, they were generally attending Stanford, Harvard, and Yale.
This was intriguing. I wanted to find out more, and I soon became a sort of technical adviser to the admissions office, learning for the first time many of the sorts of basic facts about racial preferences that we discussed in Chapter Two. The law school’s admissions officers used an index (which I soon persuaded them should run from 0 to 1000) that combined information about an applicant’s LSAT score, her undergraduate grades, and the difficulty of her college. An applicant’s index score determined whether she was a presumptive admit, a presumptive reject, or a borderline case meriting closer scrutiny. But these thresholds were completely different for each racial group. Whites would be presumptively admitted with an index of 820 and presumptively rejected with an index of 760. For blacks and American Indians, the corresponding numbers were 620 (admit) and 550 (reject).
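The threshold system just described is mechanical enough to sketch in a few lines of code. This is purely an illustration: the cutoffs are the ones quoted above, but the function and table names are invented, and nothing like this code existed in the admissions office.

```python
# Illustrative sketch of the index-threshold logic described in the text.
# The index runs 0-1000; cutoffs are (presumptive admit, presumptive reject).
THRESHOLDS = {
    "white": (820, 760),
    "black": (620, 550),
    "american_indian": (620, 550),
}

def classify(index: int, group: str) -> str:
    """Return the admissions category for a given index score and group."""
    admit, reject = THRESHOLDS[group]
    if index >= admit:
        return "presumptive admit"
    if index <= reject:
        return "presumptive reject"
    return "borderline"
```

The point the sketch makes concrete is that the same index could mean presumptive rejection in one racial pool and presumptive admission in another: an index of 700 is a rejection for a white applicant but comfortably above the admit threshold for a black applicant.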
Most remarkable to me was the role of law students in this process. Through a series of protests and confrontations over the years, including sit-ins, student takeovers of the records office, and smashed windows, the administration and the major student groups had reached an accommodation. The Asian, black, and Hispanic student organizations would each be allowed to appoint students to race-based admissions committees. The students would review their same-race applicants and make recommendations; usually the students favored lower-index applicants with more “authentic” or activist backgrounds. Faculty and admissions administrators would try to pay reasonable heed to these suggestions while screening out the academically weakest candidates.
Once I understood the system, what I observed in the classroom no longer surprised me. The reason I was seeing relatively little racial overlap in academic performance in the classroom was that there was virtually no overlap in academic indices across racial groups. UCLA was achieving an extraordinary level of racial integration in the classrooms through a racially segregated admissions process in the basement.
And although the admissions process was generally unknown to students, the performance gap in the classrooms seemed to be well known. Once, when a student told me about his course load, I observed that he was in a lot of tough classes graded on mandatory curves. That was true, he responded, but a couple of them were “safeties.” I asked him what that meant. A little embarrassed, he said that was a term for a class that had enough black and Hispanic students to absorb the low grades on the curve. His remark was breathtakingly cynical—and an oversimplification too. (The correlation between race and grades was by no means perfect.) But it was hard to blame him, and I gradually learned that many students thought in these terms.
UCLA’s racial diversity, it was clear, had both wonderful and chilling elements. But I was quite prepared to believe that the good predominated. My colleagues had the very best of intentions. We had produced some outstandingly successful minority alumni. There was that magical air of successful integration. And, as several colleagues had told me, the underlying trendlines were producing steadily strong minority candidates. How could we make it better?
Alongside its diversity program, the law school had tinkered with a wide variety of academic support programs to help students who were struggling. I had taught in one of these programs and was impressed with the work of Kris Knaplund, the charismatic coordinator of the law school’s academic support. From a series of early conversations we embarked on a three-year (1992–1995) study of the school’s many different experiments in academic support. We assembled a database that covered some three thousand recent law students, with information on their academic index, the courses they took, their performance at the law school and on the bar, and, of course, what type of academic support (if any) they had received. We interviewed the various faculty who had been involved in academic support and read the comparatively scant literature evaluating other programs.
Our findings were clear. Students doing poorly in their first year of law school were often those admitted with large racial preferences. They tended to struggle at the very basic level of understanding what professors were trying to accomplish in the classroom. If a professor, for example, posed a hypothetical that slightly changed the facts of a case, these students might have a good idea of the answer but no idea why the question was being asked. Consequently, academic support that consisted of merely reviewing material was singularly ineffective. The approach that worked, at least to some degree and for some students, was to “decode” the classroom—that is, explain in detail, and with examples, exactly why professors were asking the questions they did, what they were trying to teach, and what they expected students to deduce on their own.
Academic support programs that didn’t build on these insights could be very resource intensive (e.g., individual tutoring of students by faculty) and yet essentially useless. Programs that were well designed made a real difference. But even here, we found no cure-all. The most effective academic support programs at UCLA had a lasting, positive influence on many students’ grades, but the effect was small and we were uncertain that it carried over to success on the bar exam.
According to the bar’s published reports, UCLA’s law graduates typically had a first-time bar passage rate of around 85 percent, and everyone dreaded the idea of being part of the 15 percent who failed. Failure on the bar could mean getting fired from one’s job (especially if one failed on the second attempt) or not getting a job if one didn’t have one lined up by graduation time. Retaking the bar was expensive and time consuming. And a large percentage of those who failed the first time never did pass.
Consequently, nearly every UCLA graduate signed up for a “bar review” course. These were privately run, very expensive, eight-week courses that covered all the major topics on the bar, administered practice exams, and offered insider tips on how to pass the bar. So powerful was the faith in these bar review courses that the law school’s academic calendar ran on a different schedule from the rest of the university so that graduates could, upon receiving their diplomas, immediately dive into the review class. It was—and is—a very strange system that exists throughout legal education and seems built on the idea that law school provides no useful preparation for the bar exam.
But, as shown in Figure 4.1, when I found data linking law school grades with bar exam results for UCLA students, quite a different story appeared.
Law school performance was by far the best predictor of bar exam outcomes. If you were in the top third of the class, you had more than a 99 percent chance of passing the bar; if you were in the bottom tenth of the class, you had only a one-in-four chance of passing.
 
FIGURE 4.1. Grades and Bar Passage
Source: UCLA Records Office, “Bar results per GPA Range, First-Time Takers,” July 1998 and 1999. Values shown are an average of the two years.
The obvious implication of this data was that law school did matter to bar performance (and, as we shall see later, to one’s actual success as a lawyer). A secondary implication was that bar review courses were greatly overrated—a conjecture that was proven several years later in a study Steven Klein and Roger Bolus did for the Texas Bar, in which they found that completely skipping any bar review course would affect bar passage for only two or three graduates out of every hundred.
There were, of course, innumerable other factors that necessarily influenced how one did on the bar exam: the courses one took, how hard one studied, whether one could stay focused during three grueling days of examination, and so forth. But all of these put together appeared to be less important than law school grades.
Such an unexpectedly high correlation fascinates social scientists like me. It naturally suggested that law schools and law students had been overlooking something important about the way things work.
In one sense this correlation was not so mysterious. If one thought of the bar exam as, essentially, a gigantic law school exam, wouldn’t one expect its results to correlate with the results of all the little exams one took in law school? If one was good at writing essays and answering multiple choice questions about legal topics, wouldn’t that simply translate from one’s law exams to one’s bar exams?
This was certainly true, but there was more to it than that. The California Bar Exam had introduced in the 1980s a one-day component known as the “performance exam” that was not at all like any conventional law school exam but rather like the sort of practical task a young lawyer might be given by her boss. Students sat down in the exam room and opened a packet of materials from a mythical client with a legal problem. The packet included an introductory memo, various types of evidence, relevant cases, forms, and statutes. The candidate had to sort this all out, figure out a strategy for the client, and explain it in a memo.
It turned out that law school grades predicted scores on the performance exam virtually as well as they predicted scores on the more conventional parts of the bar exam. If nothing else, this meant that law school grades (LGPA) were a very powerful predictor both of one’s ability to demonstrate an understanding of a wide array of legal issues and of one’s capacity to assemble legal materials and do conventional legal analysis.
Why would this be? One possibility was that a student’s character and personality traits—such as drive, determination, self-discipline, memory, and hard work—are reflected both in LGPA and in bar exam performance. But if this was the main explanation, then why were undergraduate grades nearly worthless as a predictor of bar exam results?
The only other possible explanation—and the more likely one—seemed to be that law schools taught and tested a wide array of legal knowledge and skills that were also tested on the bar exam and that some students learned much more than others. Students who consistently got As or even high Bs in their law school classes were developing a much more powerful and relevant skill set than those who got low Bs or Cs. That would explain both the general bar results and the performance exam results. It might also explain why—after long experience—almost all big law firms looking for the students who would make the best lawyers and almost all judges looking for the best law clerks placed so much weight on the LGPA of law students applying to them for jobs. (Many firms, I knew, had informal rules that they would hire UCLA graduates only if their LGPAs were above a certain threshold.)
A problem with this hypothesis was the conventional wisdom among many law students and even professors that good law school grades had nothing to do with whether one would be a good lawyer. Most students see law school as a “credentialing” exercise: jump through the requisite number of hoops and get the diploma with the all-important name of the school on it. And many professors feel that a fair amount of law school teaching is too theoretical (especially at more-elite schools) to help students pass the bar or practice law and that grading (at least grading by other professors) is rather arbitrary. Indeed, in 1995, just as I was discovering the predictive power of LGPA, my UCLA colleagues were voting to abolish numerical grades as being artificially precise and to greatly reduce the number of students with low grades by raising the median grade to B.
The official disdain of the school for its own grading system was even more pronounced in its dealings with employers. Brushing aside the law firms’ far better vantage point for assessing the importance of grades, UCLA, like most other law schools, refused to reveal the class rank of its students, which employers considered the single most important piece of information. And UCLA, like most other elite law schools, stipulated that if an employer wished to be provided an on-campus office for interviewing students for jobs, it would have to interview students selected by UCLA itself through a grade-blind lottery. These egalitarian gestures reflected the school’s official ethos that winning admission to the school would always be more important than doing well there.
By 1997 I had observed the following:
• About half of UCLA Law School’s black students were ending up in the bottom tenth of the class; about half of the Hispanic students were ending up in the bottom fifth. Their poor performance seemed to be mostly due to the fact that they were arriving at the school with much lower credentials (and implicitly weaker academic preparation) than most of their classmates.
• Low grades meant poor bar performance. The school’s first-time bar passage rates were about 50 percent for blacks, 70 percent for Hispanics, and 90 percent for whites.
• Academic support could slightly improve outcomes for our students, but it had at best only a marginal effect on bar passage rates.
• Yet our minority students were plainly smart, and similarly qualified students attending less elite law schools in Southern California seemed to have much better outcomes. That is, students at other law schools who had academic indices similar to UCLA blacks passed the bar 75 to 80 percent of the time. Students at other schools who had academic indices similar to UCLA Hispanics passed the bar 80 to 85 percent of the time.
Or so it seemed, from the aggregate level data available for entire schools. Without good data comparing individuals attending different schools, I could not really test ideas that I did not yet quite recognize as a mismatch hypothesis.
In 1998 I was invited to a conference organized by the American Bar Foundation to plan an ambitious, unprecedented study of a national sample of lawyers through the early stages of their careers. The proposed study was funded and became known as the “After the JD” study (AJD); I became one of its leaders from 1999 through 2004. The project put me near the center of the legal education community.
There is more on the AJD study below, but of more immediate significance was that one of my AJD colleagues helped me to get access to a massive dataset that documented such matters as racial gaps in bar exam passage. It was known as the Bar Passage Study (BPS), and it had been developed by the Law School Admissions Council (LSAC), the nonprofit organization that administered the LSAT exam. In the late 1980s LSAC had commissioned a major investigation of bar passage rates, primarily aimed at understanding whether (as was rumored) blacks and Hispanics nationally had poor bar passage rates, and if so, whether bar exams were somehow biased against minorities. LSAC was able to enlist nearly 90 percent of all accredited law schools in the BPS, and those schools, in turn, persuaded some 80 percent of their students to participate. A total of more than twenty-seven thousand students starting law school in the fall of 1991 completed lengthy questionnaires and gave LSAC permission to track their performance in school and later on the bar exam. A subsample of several thousand students also completed follow-up questionnaires in 1992, 1993, and 1994. The BPS itself continued to collect data until 1997.
Despite this broad involvement and the massive cost of the BPS, by 2001 almost nothing had been heard of its results. I attended a presentation at which the study’s leader, Dr. Linda Wightman, flashed a series of slides with not-very-revealing information on them. Yes, she announced, there was a racial gap in bar passage rates, and it was worrisome, but it was not as large as some had feared. LSAC issued a follow-up report, which was also remarkably bland and opaque.
Surely there had to be much more in a dataset reported to have cost $5 million that spanned all of legal education! I dove into it, sorting through the hundreds of variables on tens of thousands of law students, with a growing sense of disappointment. Wightman and the other LSAC administrators had suppressed the identity of every individual law school in the study. Law schools had been put into clusters; state bar results had been grouped into regions. And the bar data told only whether a candidate had passed or failed the bar exam; there was no specific information on scores either on the exam as a whole or on its various component parts. I will return to the significance of these omissions. For now, suffice it to say that I felt like an art student who journeys to Florence to see Michelangelo’s David only to find a fuzzy two-dimensional photo in its place.
Despite its weaknesses, the BPS had enough information to answer many questions. For example, if one set aside the historically black schools, the BPS data showed that nearly every American law school was using very large racial preferences (a finding since confirmed with other data). UCLA’s racial preferences were by no means unusual. The data also vividly illuminated the “cascade effect” illustrated in Chapter Two. Indeed, Dr. Wightman was soon to publish an article showing that the vast majority of minority law students would still be admitted to a selective law school (albeit a less elite one) if racial preferences suddenly disappeared.
These pervasive racial preferences had the same effect on performance nationally as I had observed at UCLA. The median black student at the schools using substantial racial preferences had an LGPA at only the sixth percentile of the white students. In other words, 94 percent of whites were getting better grades than the median black student. Conversely, only about 10 percent of all black students were making it into the top half of their classes.
Still—and this is an important point—low grades from blacks or other students receiving preferences did not prove by themselves that these students were mismatched in the way we discussed in Chapter Three. The fact that a student might get Cs at Harvard but would get Bs at a twentieth-ranked school (e.g., George Washington University) or As at a fiftieth-ranked school (e.g., American University) does not necessarily mean that the student at Harvard is learning less or will end up being a less effective lawyer than the student with higher grades at a less-elite school. Being a little fish in a big pond is not, a priori, a bad thing. It might be harmful or it might be helpful, or it might make no difference at all to one’s medium- and long-term success.
But with the BPS data, one might be able to assess the actual effect (if any) of racial preferences on learning by using bar exam results. Legal education was one of those rare places in higher education where the graduates of all schools had their knowledge tested in a fairly uniform, systematic way. And the BPS did record bar exam outcomes, though with the data-blurring problems mentioned earlier. Because the BPS recorded only whether a candidate passed or failed, not scores, first-time bar passage was the best available measure for comparing learning outcomes across law schools. And because one could not identify specific schools in the BPS, one could not directly measure each student’s degree of potential mismatch.
However, there was another approach: to use the race of students as a proxy for admissions preferences. Unlike colleges, law schools granted very few admissions preferences for reasons other than race; if one excluded the historically black schools, then the vast majority of blacks in the BPS data received large admissions preferences, and the vast majority of whites did not. I could, therefore, use black students as a collective proxy for preferences, generally low grades in law school, and hypothetical vulnerability to mismatch. I could use white students as a collective proxy for the absence of these things. Given the type of analysis I planned to do, the fact that these were inexact categories with some overlap (i.e., there were a fair number of successful black students and unsuccessful white ones) was actually an advantage because the overlap would help to assure me that the same statistical patterns applied to both groups.
Taking the twenty-odd thousand white and black students in the BPS, I used regression analysis to see how well the prelaw academic indices of students (e.g., their LSAT scores and college grades) predicted their bar outcomes. The associations were strong but not overwhelming. Of particular interest, though, was the effect of “race” on outcomes:
Blacks were much less likely to pass the bar exam than were whites with the same academic index coming out of college.
So clearly something was depressing black performance. But what could it be? Why would black and white students who came out of similar colleges with very similar academic qualifications end up with dramatically different bar passage rates? The potential differences in social background, high school education, or reactions to the pressures of college—all of these would plausibly be reflected in the academic index itself. Why, controlling for academic index, would blacks be doing so poorly on the bar? This turns out to be a fundamental question to which no plausible answer other than mismatch theory has ever been suggested.
Then I ran a second regression, adding law school class rank into the analysis. (The BPS measure of law school grades was a relative measure—that is, a measure of class rank—which neutralized differences in grading scales across schools.) This second equation predicted bar passage much more powerfully (recall my earlier finding that law school grades at UCLA strongly predicted bar outcomes). This meant that, in general, relative performance in law school was the single best predictor of bar results. Just as importantly, in the second regression the “race” effect disappeared. When one controlled for law school class rank, blacks and whites had the same success rates on the bar.
This second analysis had enormous implications. As in the case of SATs, there had been rumblings that blacks did badly on bar exams because the test was biased. My regression result—that when one controlled for law school grades, blacks did just as well as whites—matched the results of Dr. Wightman (using the same data) as well as the results of highly respected psychometricians who had done analyses with more detailed data from individual states.
In other words, the reason why blacks “underperformed” on the bar relative to whites with similar pre-law school credentials did not lie in the exam itself; it somehow lay in the law schools. In particular, the bad grades that the vast majority of blacks were getting in law school were foreshadowing bad performance on the bar.
These results were exactly what would be predicted by mismatch theory, though it is a subtle point that even some specialists have had trouble grasping. Consider two hypothetical students in New York, one black and one white, who finish college with identical records—say a 3.3 college GPA and a 160 on the LSAT. Those numbers are good but not outstanding. Both apply to law school. The white student is admitted to a thirtieth-ranked school (say, Fordham Law School) whereas the black student is admitted to Columbia, ranked fifth. The white student ends up with grades that put her in the middle of her class; the black student, facing much stiffer competition at Columbia, ends up with low grades that put him in the bottom tenth of his class. Now the mismatch effect: suppose the black student’s low grades signify not only that he learned less than his Columbia classmates, but less than his counterpart at Fordham. And consequently, the black Columbia student is three times as likely to fail the New York bar as his white Fordham counterpart.
Now suppose this illustration typifies the operation of racial preferences in law schools. Then one would expect that (a) black students would do much worse on the bar exam than white students with the same LSAT scores and college grades because the black students are being “preferenced” up into much more competitive schools, where they learn less (hence the results of my first regression), and (b) when one adjusts for the mismatch these preferences cause (by controlling for law school class rank), the racial differences in bar exam performance disappear (as shown by my second regression).
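The logic of these two regressions can be made concrete with a toy simulation. To be clear, this is invented data, not the BPS: every number below is arbitrary, chosen only so that a preference lowers a student’s class standing and class standing (not race) determines bar passage, as the mismatch story posits.

```python
# Toy simulation of the two-regression logic. All parameters are invented
# for illustration; this is not the BPS data.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
race = rng.choice(["white", "black"], size=n, p=[0.9, 0.1])
index = rng.normal(700, 60, size=n)                # prelaw academic index
preference = np.where(race == "black", 150.0, 0.0)  # boost into a stronger school

# Class standing reflects the index minus the preference boost, plus noise;
# bar passage depends only on standing, never directly on race.
standing = index - preference + rng.normal(0, 30, size=n)
pass_prob = 1.0 / (1.0 + np.exp(-(standing - 620) / 40))
passed = rng.random(n) < pass_prob

# "First regression": compare students with the same prelaw index.
band = (index > 680) & (index < 720)
white_rate = passed[band & (race == "white")].mean()
black_rate = passed[band & (race == "black")].mean()
# black_rate falls far below white_rate despite identical indices

# "Second regression": compare students with the same class standing.
sband = (standing > 580) & (standing < 620)
w2 = passed[sband & (race == "white")].mean()
b2 = passed[sband & (race == "black")].mean()
# within a standing band, the racial gap essentially disappears
```

Running this produces exactly the pattern in the two regressions: a large "race" gap when one controls only for the prelaw index, and virtually none once class standing is controlled, even though race plays no direct role in the simulated bar outcome.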
So the BPS analyses fit the mismatch story exactly. Though weaknesses in the data made it hard to be precise, the analyses implied that mismatch in law schools was roughly doubling the rate at which blacks failed bar exams.
There was, however, one alternative theory. What if law school was somehow undermining black performance, independent of mismatch? What if, for example, the use of the harsh Socratic method of interrogating first-year law students came off to black students as an implicitly racist pedagogy and that this undermined their performance?
This argument was pretty implausible—law schools prided themselves on their racial sensitivity—but there was, in any case, a way to test this hypothesis: do a third regression to determine whether it was preferences, or race, that produced lower law school grades among blacks. One couldn’t do such an analysis with the BPS data because of its lack of granularity. But the LSAC had done several studies with other data sources, and I was able to do an even better analysis—controlling not only for the college grades and LSAT scores of undergraduates but also for the quality of the college they attended—with data I had collected with several colleagues from twenty law schools in the mid-1990s. All of these analyses showed very much the same thing: Preferences explained at least 80 percent of the low grades blacks received. Race probably explained nothing, but, in any case, no more than 10 to 20 percent of the grade gap.
The mystery was apparently solved. Blacks graduating from law school indisputably had a terrible time passing the bar. Ever since the advent of racial preferences, there had been an undercurrent of muttering about the large racial disparities in bar passage. Were blacks somehow being discriminated against? Was there something wrong with them? But now it appeared that mismatch could almost entirely explain this phenomenon. Blacks were doing badly on the bar not because of test bias or because of invisible weaknesses they brought to law school or because law schools were somehow unwelcoming. They were doing badly, it turns out, because the law schools were killing them with kindness by extending admissions preferences (and often scholarships to boot) that systematically catapulted blacks into schools where they were very likely not only to get bad grades but also to have trouble actually learning. It was thus about race only to the extent that schools based large admissions preferences on that factor.
As my work progressed, many different methods of testing and exploring the mismatch phenomenon in law schools arose. For example, one of my students wondered whether older white students (to whom law schools often gave admissions preferences in pursuit of a different kind of diversity) might also encounter mismatch problems. The BPS allowed us to identify the age of students and confirm that, indeed, a large percentage of older white students were attending schools where their entering credentials were a good deal lower than those of their classmates. Yes, they had disproportionate trouble on the bar. And yes, when we controlled for mismatch, the difference in performance disappeared. Poor outcomes were not a function of age, race, or any other group characteristic; they were a function of large preferences.
The larger the preferences were, the more severe the mismatch effect would be. My rough estimate was that in the early 1990s, without preferences, about four in five blacks entering law school would pass the bar and become lawyers; with preferences, a large majority of blacks were entering more competitive law schools and only three in five would become lawyers. By 2003, when I was reaching these conclusions, bar exams had become harder to pass in most of the United States. It appeared that for contemporary black law students starting law school, about 47 percent were becoming lawyers (and one-third of these were having to take the bar two or more times). By contrast, 83 percent of entering white students were becoming lawyers, and only one in twenty of these whites required more than one attempt on the bar. Without mismatch, I estimated that the black success rate might be as high as 70 percent. Indeed, the effect was big enough to suggest at least a strong possibility that racial preferences and their resulting mismatch were reducing the number of black lawyers.
It wasn’t hard to see why this might be the case. Even though nearly all schools used racial preferences, most of these preferences were simply (through the cascade effect) moving around students who were clearly qualified for some law school. An analysis by LSAC’s Dr. Wightman suggested that in 2001 all law school racial preferences increased the pool of black students admitted to the nation’s law schools as a group by only 14 percent. These 14 percent—who could not get into any law school but for racial preferences—were particularly weak students. Indeed, the evidence suggested that less than a third of them were becoming lawyers, which meant that the loss of these students would have a minimal impact upon the number of lawyers passing the bar each year—even assuming hypothetically that the other 86 percent would continue to have the same aggregate bar passage rate as under the current preference-ridden regime.
But that hypothetical assumption, of course, considerably understates the potential rate of black success: Mismatch appeared to reduce the other 86 percent’s chances of becoming lawyers by nearly a third. Admittedly, these were estimates; nonetheless, the negative effect of mismatch on the success of black law students was clearly much larger than the positive effect of racial preferences in expanding the pool of blacks admitted into law schools.
And setting aside the aggregate effects, the “pool expanding” effects of law school racial preferences combined with the mismatch effects were causing a great many minority law students to spend many years and large sums of money pursuing a career that would never materialize. If these results were correct, then law school mismatch was damaging thousands of lives each year.
These findings did not come all at once; I would puzzle over the data, set it aside for a few months, and come back to it again, trying to think through the idea of mismatch and how it might be tested. And as we shall see, there were better ways of using the BPS data that remained to be discovered by others. But by June 2003—ironically, just as the Supreme Court was issuing two big decisions about admissions preferences—the broad conclusions I have sketched were all fairly clear.
There was still one central issue that I wanted to understand: how did law school preferences affect blacks on the job market? Obviously, preferences were harmful for law graduates who never passed the bar. But it was very widely believed that legal employers (especially elite law firms) cared greatly about hiring graduates from the best law schools. Increasing the number of blacks at top-ten or top-twenty law schools was thus seen as essential to giving the strongest blacks better opportunities and helping to integrate not just the legal profession but especially its upper reaches.
Still, if it was true that low grades indicated less learning and that law school learning was related to being a good lawyer, then mismatch would undermine legal skills, and employers would soon catch on.
As it happened, the best place to get some insight into these questions was the AJD, the project that I had helped to coordinate since 1999. By 2003 we had gathered data on a national sample of nearly five thousand recently minted lawyers, including information on their academic indices, the law school they attended, their law school grades, and their jobs as lawyers.
With the AJD, it was possible to compare similar law students attending different schools to measure what effect their degrees—and their grades—had on their job prospects, both immediately after law school and over the first few years of their legal careers. My analysis of the data found that law school eliteness did have a positive effect on early career earnings, but the effect was small. Law school performance was generally as important as and often more important than law school eliteness. A student who went to thirtieth-ranked Fordham and ended up in the top fifth of her class had jobs and earnings very similar to a student who went to fifth-ranked, much more competitive Columbia and earned grades that put her slightly below the middle of the class. I found that in most cases like this, the Fordham student had the edge in the job market. This suggested that although firms cared about the eliteness of one’s law school, it was largely seen as merely a signal—to be weighed with other signals, like grades—for identifying intellectual horsepower and strong legal training.
Why, then, were so many law professors convinced that school eliteness was crucial to career success, when I was finding it played a secondary role? I would later explore this question further and find a simple explanation. For much of the twentieth century elite law firms had indeed focused their hiring only on elite law schools. Social background during that time was nearly as important to the relatively small and stuffy law firms as intellectual ability, and these firms had almost a clubby relationship with the elite schools. However, all this had started to change in the 1960s. Many forms of legal activity, from tort litigation to environmental regulation, took off during the 1960s; the number of lawyers as well as the size and profitability of the elite firms began to grow with astonishing speed. The gentlemanly world of old “white shoe” law firms gave way to the bruising competition of “big law.” Old inhibitions at law firms about hiring Jews, women, and blacks began to slip away at just this time, and the need for more lawyers caused the firms to start considering candidates from a much broader range of law schools.
These factors and others produced a revolution in hiring by legal employers—away from a focus on school and social breeding and toward algorithms that weighed law school eliteness against law school performance. In the period from 1950 to 1965, 91 percent of the young lawyers at elite New York law firms came from the top ten law schools (Figure 4.2). By 2000 that percentage had fallen to 39 percent. And nationwide in 2000 the largest firms hired only 22 percent of their associates from the top ten law schools.
Somehow, this fundamental paradigm shift had escaped the attention of the elite schools themselves, many of which were led by professors who had come of age long before and who, in any case, tended to think of their institutions as the center of the universe. And so they accepted uncritically the idea that admission via large racial preference to their schools would give blacks and other underrepresented minorities huge advantages in their future careers.
 
FIGURE 4.2. Many Roads to Success
Sources: Statistics for 1950–1965 are derived from the 1965 Martindale-Hubbell listings for elite New York firms that went on to be members of the Am Law 100 in 2001. We counted each listed lawyer who had graduated from law school in 1950 or later. The 2002 figures are based on AJD data. The same set of schools is classified as “Top Ten,” “Top Twenty,” and so on, for all three columns. “Top Three” includes Harvard, Yale, and Columbia, the latter because it has long been a principal feeder of New York elite firms.
In Chapter Six we shall return to the question of whether elite schools conferred any long-term career advantage upon students receiving preferences, but my analysis of AJD data pretty much demolished the idea that there was any short-term advantage. Employers had started paying more attention to grades right around the time that racial preferences were coming into wide use at law schools (the late 1960s and early 1970s), and employers were, therefore, well aware that preference recipients were generally not performing well in school. Indeed, my analysis found that although most big legal employers used racial preferences to diversify, they paid particularly close attention to black candidates’ grades. Just as I was finishing these analyses, a small factoid provided an “aha!” capstone to this line of reasoning. It seemed that among all law schools, Harvard University attracted the largest number of law firms to its campus to interview students—no surprise there. But the number-two school was low-ranked Howard University—the historically black law school in Washington, DC—which produced more high-GPA black law graduates than any other school.
The stories the many datasets told merged into a larger story that seemed, at every point, to overturn the conventional wisdom. A great deal remained conjectural, but four basic findings struck me as indisputable. First, law schools were using very large, race-based preferences, and even nonelite schools were effectively constrained to use these preferences because of the cascade effect. Second, many and perhaps most black law students (and, to a lesser extent, Hispanics) were saddled with lousy grades, mediocre graduation rates, and terrible prospects for passing the bar. Third, the mismatch effect was the most plausible explanation for the high failure rates on the bar, and confirmation of that hypothesis would mean that affirmative action was systematically lowering the learning of its recipients. And fourth, the major supposed benefits of the affirmative action system—that it boosted the number of minority lawyers and helped propel them to successful careers—were at best unproven and at worst completely wrong.
By the fall of 2003 the various elements of this research were complete. I had decided that the parts of the story were so interconnected that it should all go into one very long article. I began to show drafts to a few colleagues. Reactions were immoderate. “This is 9/11 for law schools,” observed one prominent scholar, somewhat enigmatically, over the phone. “The best law review article I’ve ever read,” commented a sociologist friend who taught at a law school. Another colleague invited me to her house after reading it. “It’s very powerful,” she said. “I’m so glad you have tenure.”