4
HOW TO USE THE RANKINGS
—EXECUTIVE SUMMARY—
Schools are routinely ranked by various authorities (and dilettantes).
The rankings provide handy guides to the reputations of different programs,
but are subject to many qualifications.
Consult them, but do not rely on them.
Recognize their limitations as well as their strengths.
Devise your own rankings to suit your needs.
 
For years, the annual ranking put out by U.S. News & World Report has been the most important in the field. The decisions of applicants, recruiters, and even the law schools themselves are heavily influenced by the results of the annual U.S. News rankings. Consequently, this chapter devotes considerable attention to its strengths and weaknesses. In addition, it explores other rankings as well as data that can be used to rank law schools in various ways.

USING THE RANKINGS

The ranking of law schools is a very uncertain science. Organizations and individuals undertaking these rankings are confronted with daunting methodological problems. From an immense amount of information, a few factors must necessarily be singled out and calculated—all in order to provide “scores” that allow readers to differentiate among schools that have performed almost identically. For example, how important is it to have a library of five million volumes rather than six hundred thousand volumes, and how does that compare with having a student body LSAT average of 168 rather than 171? Is the school with five million volumes and an LSAT average of 168 better than the school with six hundred thousand volumes and an average of 171? Or is it equal to it, or worse? It is not obvious how the two schools should be compared, even when two relatively simple quantitative measures are employed. The problem is made infinitely more complicated when numerous other factors are considered, especially because many of these are inherently subjective rather than easily and objectively quantifiable.
Several bypasses are available to the ranking organizations. They can examine the opinions of those doing the hiring at major firms, taking the likely employers of JDs as the ultimate arbiters of worth. (To an extent this is correct, of course, insofar as JDs tend to view the value of their degree in large measure as a matter of what employment doors it opens.) Another possible shortcut is to examine the earnings of the graduates of each school, relying again on the market as the arbiter of the value of the JDs from the various schools. Unfortunately, these shortcuts also suffer from limitations. Some are due to the fact that an overall ranking for a school does not distinguish between how its private-practice graduates do and how its public-interest graduates fare. Nor does it take account of the fact that its graduates may do very well locally but not in another region (or country). Thus, if a school is rated highly because its graduates make a lot of money in private practice, but you intend to work as a public defender, this school may not boost your salary—or employment options—more than another school would.

SOME WARNINGS

Rankings are useful as a very rough guide to the reputation and quality of different programs. Most people take them far too seriously, however, when considering where to apply. It is inappropriate to take the latest U.S. News ranking and limit yourself to the top five schools in the list. The schools differ enough in their goals, programs, and atmospheres that a person who will be well served by one may be poorly served by another. To take an obvious example, a person who is determined to study tax law should probably be looking at Georgetown rather than Texas (unless he or she intends to practice in Texas). Both are top-tier schools, but their missions and offerings are quite different. Georgetown offers dozens of tax courses and an LLM in tax, whereas Texas offers only a handful of tax courses.
Chapter 3 lists several dozen criteria that are relevant to choosing the right program. Not all are equally significant, and it is undeniable that reputation is critically important. But it would be foolish to opt for a school ranked fourth by U.S. News, rather than one ranked sixth, solely because of these rankings—if the first had an unsuitable atmosphere, had few electives in the field of law you want to enter, or suffered from one of a number of other defects that may also be important to you. There is no precision to these rankings; the same publication may reverse the rankings of these same schools next year! The imprecision and variability of the rankings is one reason for being cautious in using them; another reason for caution is that one school will be able to offer you a program geared to your needs whereas another will not.
These concerns give rise to some guidelines for using rankings:
 
1. Look at as many rankings as possible and consider the consensus rather than any one ranking.
2. Consider even this consensus view as only an approximation of the appropriate tier for a school. Thus, a school ranked about tenth to fifteenth in various rankings should be regarded as a very fine school, to be taken very seriously, but whether it should be ranked in the top five or merely the top twenty is not determinable.
3. Because you should be looking for the best program to meet your specific subject and other needs, with an atmosphere in which you will thrive, the rankings have only a modest part to play in helping you to find this program. They have little to say about which school will provide the courses that will be most useful, the connections that will matter most for the job and region in which you wish to be employed, the academic and social environments at each school, and other key factors.
4. More important than the rankings will be the research you do concerning the details of specific programs, which is discussed in detail in Chapter 3.

THE RANKINGS

The seven rankings charted below cover a total of 33 out of the 191 law programs currently accredited by the American Bar Association. The starting point was to consider the top schools according to U.S. News, then see how those same schools fare from other perspectives. The various rankings are explained and discussed later in the chapter.
There is little agreement as to which school is the fourth best, or which is the nineteenth best. There is reasonable agreement, however, as to which of those schools are viewed as the elite. Considering four sets of rankings (median salary, peer assessment, LSAT score, and Supreme Court clerks), the following schools were ranked in the top five or ten at least three times:
Other schools were ranked in the top fifteen or twenty at least three times:
These rankings considered widely disparate data, yet a reasonably clear picture emerges when the whole set of rankings is considered. Eleven schools were routinely ranked in the top ten and another seven were ranked in the top twenty multiple times.

U.S. NEWS & WORLD REPORT

Methodology. The U.S. News & World Report magazine rates law schools each year. Its methodology is purported to be quite comprehensive: Each factor considered in the ranking is broken down by score for the reader to examine. These factors, which are divided into several subcategories, are: “quality assessment,” formerly termed “reputation” (counting for 40%); student selectivity (25%); placement success (20%); and faculty resources (15%).
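To make the mechanics concrete, the following is a minimal sketch (in Python) of how a weighted composite of this kind could be computed, using the category weights just listed. The subscores, the two hypothetical schools, and the assumption that each factor has already been normalized to a 0–100 scale are illustrative only; this is not U.S. News’s actual calculation.

```python
# Illustrative sketch only: hypothetical 0-100 subscores combined with the
# category weights given in the text (quality assessment 40%, selectivity 25%,
# placement 20%, faculty resources 15%). Not U.S. News's actual data or method.

def composite_score(subscores, weights):
    """Weighted average of subscores already normalized to a 0-100 scale."""
    return sum(weights[factor] * subscores[factor] for factor in weights)

us_news_style_weights = {"quality": 0.40, "selectivity": 0.25,
                         "placement": 0.20, "resources": 0.15}
alternative_weights = {"quality": 0.30, "selectivity": 0.25,
                       "placement": 0.20, "resources": 0.25}

school_a = {"quality": 96, "selectivity": 85, "placement": 85, "resources": 50}
school_b = {"quality": 80, "selectivity": 85, "placement": 85, "resources": 90}

for weights in (us_news_style_weights, alternative_weights):
    a = composite_score(school_a, weights)
    b = composite_score(school_b, weights)
    print(f"A: {a:.2f}  B: {b:.2f}  leader: {'A' if a > b else 'B'}")
# A: 84.15  B: 83.75  leader: A
# A: 79.55  B: 84.75  leader: B
```

Merely shifting ten points of weight from quality assessment to faculty resources reverses the order of the two hypothetical schools, which is precisely the arbitrariness in weighting discussed below.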
Advantages of this approach. The virtues of this approach are clear. By measuring each school on a number of bases, U.S. News can claim to have achieved a depth and breadth that no other ranking has. Furthermore, the ratings that U.S. News’s methodology produces are reasonably stable over time, with few extreme jumps and falls in individual rankings year on year. This is presumably a reflection of reality, as it is rather unlikely that the quality of many law schools would change dramatically in a short period of time.
Limitations of this approach. The U.S. News ranking suffers from two problems that face any such survey. For one, innumerable factors that might be highly relevant to you are not included in this (or any other) ranking. Thus, there is no direct measure, for instance, of the actual quality of teaching or of faculty accessibility at each school.
For another, different applicants want different things in a school. For example, George may prefer a small school with extensive student-faculty interaction and plenty of environmental law courses. Lisa, on the other hand, may prefer a large school in which she can be relatively anonymous and have plenty of electives in intellectual property. Producing one ranking, based upon whatever weighting of these two factors is chosen, cannot do justice to the needs of both George and Lisa. (In point of fact, the U.S. News rankings are arbitrary in the weights assigned to the various factors.)
Some of U.S. News’s categories are clearly relevant to ranking schools but are hard to calculate in a sensible fashion. Others of its categories can be calculated fairly easily but fail to produce useful information.
Thus, school quality is highly relevant to the ranking of schools and to candidates’ decision-making. The problem is in measuring it. U.S. News uses two separate surveys—a poll of law school deans and professors (“peer assessment”), and a poll of judges and lawyers—and combines the responses. Each person polled is asked to rate (on a scale of 1 to 5) the reputation of each of the 190 or so schools. It is highly unlikely, however, that even a well-informed law school dean will know the operations of other schools so thoroughly that he or she can accurately rate many of them. This is likely to be even more true of judges and lawyers, who are not even in the education business. The probable effect of this is that both groups might tend to overrate schools in their region, schools that are already famous, or those that make the biggest splash—or even those they themselves attended. U.S. News surveys only large firms, which gives a boost to schools with large graduate populations in New York City.
A potentially larger problem is that one critical group is somewhat overlooked: Employers are not asked about their hiring practices.
The selectivity measure depends heavily upon the median undergraduate grade-point average of incoming students. This statistic is, remarkably, unadjusted in any way. This assumes that a grade is a grade, regardless of circumstances. At a large number of colleges the median grade-point average is 3.3–3.6, whereas at the military academies it is below 3.0. The rigor of courses at a community college should hardly be compared with that of courses at Caltech, but for the purposes of U.S. News’s rankings, a GPA is a GPA no matter where it was earned. The difficulty of getting top grades in hard sciences is generally far greater than that encountered in the liberal arts, yet the rankings do not take this into account. By the same token, a student who took only introductory or low-level courses would be considered on equal footing with someone else who did advanced course work in physics, philosophy, and ancient Greek.
Consider, too, what this measurement completely ignores. Many law schools now accept a substantial number of transfer students, but their undergraduate grades are not considered in this measure. Neither are the graduate results for those who have gone beyond undergraduate studies prior to attending law school.
THE GPA MESS
The U.S. News treatment of undergraduate GPAs, described in the text, is roughly analogous to the following treatment of LSAT scores:
➤ Applicants from some schools (especially the least rigorous) are given a four-point bonus.
➤ Applicants (from all schools) taking the fewest credits and easiest courses get a three-point bonus.
➤ Students in some fields (arts, humanities, social sciences) receive a five-point bonus, whereas those in other fields (mathematics, hard sciences) receive a three-point penalty.
Combining the resulting LSAT scores would produce a complete muddle, of course, but no more so than currently exists with the GPA statistics.
The measure of so-called faculty resources (the expenditures per student for instruction, library, and student services) is relatively straightforward; the problem is that it is of extremely dubious value. Let’s consider two schools, Acme Law School and Garage College of Law. Acme charges $40,000 annual tuition whereas Garage charges $20,000. If Acme gives $20,000 of financial aid to each student whereas Garage gives no financial aid, students at both schools will pay precisely $20,000 each. Acme, however, will be considered to have “spent” money on its students and Garage will not, resulting in Acme being given a higher ranking in this category.
Some law schools are silly enough to let the overall university administration pay their utility, maintenance, and other bills. Other law schools make sure that they are “billed” by their universities for these expenses. The latter schools are considered to have “spent” more money on their students, thereby raising their rankings. Presumably some smart law school deans are considering getting their universities to charge them mega-rents for their buildings so as to bolster their U.S. News rankings.
The problem does not end with how to calculate what a law school really spends on its students. An even greater problem arises from the fact that schools are ranked according to how much they spend, not according to the value they get for their expenditures. For example, throughout the 1990s Japan spent a higher percentage of its national income on business investment than did any other developed economy, but spent virtually the whole decade floundering from recession to recession. This hardly provided other economies with an example to follow. Nonetheless, Japan would have been placed at the top of a U.S. News ranking rather than at the bottom as it surely deserved.
The placement success measurement suffers from both types of problems. It is based in part upon a highly arbitrary formula. U.S. News calculates the proportion of the latest class to graduate employed full- and part-time approximately nine months after graduation but includes one-quarter of those students whose employment status is unknown and excludes those who claim they are not looking for employment. This estimate is obviously flimsy, but it happens to have major consequences. Schools that place most of their graduates locally find it easier to track them, meaning they will have fewer graduates fall into the “unknown” category, thereby giving a boost to these schools that is not readily available to those schools that place their graduates across the world (and thus find it hard to keep such close track of them). Also, numerous schools have hired their graduates for make-work positions (“coffee, anyone?”) to boost their score on this measure.
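To see how the arithmetic described above might work, here is a minimal sketch under one reading of that formula: a quarter of the graduates whose status is unknown are counted as employed, and graduates who report they are not seeking work are removed from the calculation. The figures, and the treatment of the denominator, are assumptions for illustration rather than U.S. News’s documented rules.

```python
# Sketch of the placement-rate calculation described in the text, under assumed
# rules: 25% of "status unknown" graduates count as employed, and graduates not
# seeking work are dropped from the denominator. All figures are hypothetical.

def placement_rate(employed, unknown, not_seeking, class_size):
    counted_employed = employed + 0.25 * unknown
    counted_class = class_size - not_seeking
    return 100 * counted_employed / counted_class

# Two hypothetical classes of 400: both have 350 employed graduates, but the
# second school lost track of 40 of them and must report them as "unknown".
well_tracked = placement_rate(employed=350, unknown=20, not_seeking=10, class_size=400)
poorly_tracked = placement_rate(employed=310, unknown=60, not_seeking=10, class_size=400)
print(f"{well_tracked:.1f}% vs. {poorly_tracked:.1f}%")  # 91.0% vs. 83.3%
```

Even though the underlying employment outcomes are identical in this hypothetical, the reported rate drops by nearly eight points for the school that cannot track its far-flung graduates, which is the boost to locally placing schools noted above.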
The methodological difficulty in calculating an accurate placement success figure is dwarfed by another consideration: It is simply not a suitable statistic for distinguishing among the very top schools. This is admittedly a statistic that can be very helpful for applicants choosing between two lower-tier schools. If only 68 percent of Golden Gate University’s graduates are employed nine months after graduation but 89.6 percent of Chapman University’s are, potential applicants are likely to find this very important to their decision-making. On the other hand, if the relevant figures for Virginia and Northwestern are 99.1 percent and 99.6 percent, respectively, it is hard to imagine that anyone should choose Northwestern over Virginia on this basis.
Each of the other measures used by U.S. News also suffers from substantial problems similar to those discussed above. In addition, numerous schools have been severely economical with the truth (not to say that they have lied!) at various times. None of these criticisms mean that the rankings are valueless; instead, you should be careful not to place undue weight upon them in deciding to which schools you will apply.

MEDIAN SALARY

Methodology. U.S. News collects information about the private sector salaries accorded the most recent graduates of law schools. Thus, it provides the 25th and 75th percentiles (the mid-range) of such salaries. (It does not, however, use the information in its own rankings.) The data for the chart was compiled by taking the middle of these two figures. Thus, if a school’s 25th and 75th percentiles were $95,000 and $125,000, the median was considered to be $110,000.
Advantages of this approach. Many applicants to law school look to land high-paying jobs, so knowing where they’re to be found is of obvious interest. Insofar as the highest-paying jobs are associated with prestigious private firms, which can be choosy about where they recruit, they are also an indication of the quality and reputation of the schools’ graduates.
Limitations of this approach. Not everyone cares about landing a high-paying private firm job, so the question for some may be whether government agencies, nonprofits, and the like value schools in a similar fashion. In other words, does attending a school favored by Wall Street firms make it easier to get a plum job with the ACLU or Greenpeace? This data does not answer the question.
Also, there are obvious regional biases to salary data: Salaries are higher in New York and Chicago than in New Orleans or Billings. There is no cost-of-living adjustment to this data. Neither is there an adjustment to account for the fact that students who opt for a school in a major metropolitan area are both more accessible to recruiters from that area and more likely to wish to settle there, making it all the more sensible for recruiters to target them.

U.S. NEWS PEER ASSESSMENT

See the discussion under U.S. News & World Report.

LSAT SCORES

Methodology. The average of the 25th and 75th percentile LSAT scores reported by the law schools was the basis for this ranking.
Advantages of this approach. LSAT scores are widely used as predictors of law school performance and, at least implicitly, as measures of intelligence. Given that much student learning results from interaction with and observation of other students, sharing the classroom and hallway with smarter rather than less smart students provides a greater learning opportunity.
Limitations of this approach. LSAT scores probably reflect native ability more than willingness to work hard during law school. Similarly, they do not reflect knowledge and experience relevant to law school classes.

BERKELEY AND TEXAS MATCHING PROGRAMS

Methodology. For some years, the University of Texas and the University of California at Berkeley (Boalt Hall) Law Schools have had a financial-aid matching program in place. They have guaranteed admitted applicants that they (the law schools) would match the financial aid offers of peer law schools so that the applicants would not have to pay more to attend Texas or Berkeley than to attend a peer school. Both schools still have this program in effect, although Texas has recently ceased to name what it considers peer schools for this program. It prefers the flexibility of being able to match offers in some but not necessarily all cases. Nonetheless, the chart above provides what Berkeley and Texas (formerly) have deemed their peer schools. The N/A designation is used to show that each school itself is precluded from showing up in its own chart. (Texas, however, is given the benefit of the doubt: For purposes of calculating the top fifteen in the chart above, it is treated as though it made its own list.)
Advantages of this approach. Law schools are surely insiders when it comes to determining which schools are or are not their peers. Similarly, they are well aware of which other schools applicants consider attending.
Limitations of this approach. To some extent, the designation of peer schools may reflect which schools applicants consider rivals—schools they would attend if given a better financial package. Thus, a quality school overlooked by applicants might not make the list while a somewhat lesser school, favored by applicants, could.
Berkeley’s list fails to include three schools that might otherwise be expected to be on it: UCLA, Texas, and Northwestern. Each might be considered a special case. In the case of UCLA, an in-state rival, it would be inappropriate to use taxpayer dollars to fight for an applicant who would otherwise attend another California state school. Texas has traditionally had very low tuition, so it has not offered (or needed to offer) substantial financial aid to attract applicants. Northwestern, for its part, was not much of a rival when the program was instituted. Perhaps it (as well as Texas, now that its tuition has jumped) will be included in a future edition of the list.
Another problem is that the two schools do not describe how they determined which schools to include on their lists, making it hard to evaluate their criteria.

SUPREME COURT CLERKSHIPS

Methodology. The number of Supreme Court clerkships for the period 1994–2010 was totaled for each school. This figure was adjusted according to the number of students attending each school, to produce a per capita ranking.
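The adjustment itself is a simple division, but a short sketch makes the effect visible. The clerkship totals and enrollment figures below are hypothetical; only the method, total clerkships divided by the number of students attending, follows the description above.

```python
# Per capita adjustment of clerkship totals. All numbers are hypothetical;
# only the method (total clerkships divided by enrollment) follows the text.

clerkships = {"School A": 60, "School B": 30}     # total clerks placed, 1994-2010
enrollment = {"School A": 1700, "School B": 600}  # students attending each school

per_capita = {school: clerkships[school] / enrollment[school] for school in clerkships}

for school, rate in sorted(per_capita.items(), key=lambda item: item[1], reverse=True):
    print(f"{school}: {rate:.3f} clerkships per student")
# School B: 0.050 clerkships per student
# School A: 0.035 clerkships per student
```

School A places twice as many clerks in absolute terms, yet School B comes out ahead once enrollment is taken into account; the sidebar on absolute versus relative figures later in this chapter returns to which view is more useful.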
Advantages of this approach. A Supreme Court clerkship is one of the most prestigious honors to be bestowed on a law student. Supreme Court justices can have essentially whomever they want as clerks, and they are presumably knowledgeable consumers of legal talent, so the law schools from which they choose clerks are presumably where the very top graduates are to be found.
Limitations of this approach. This measure obviously focuses on just a relative handful of graduates, which means that it may have limited applicability to those graduating some distance from the top of their classes. Another issue concerns the appropriate time period to consider. The data is very lumpy (i.e., the number of clerks taken in one year is small, so adding one clerk from a given school could alter the school’s rank substantially), which suggests that looking at only the last year or two would be inappropriate. On the other hand, the further back one goes the less relevant the data becomes. Also, it is apparent that the current clerks are highly influential in the selection process, thereby potentially adding a school-specific bias.
THE RANKINGS CONTROVERSY
Several years ago the deans of most of the leading (and also-ran) law schools in the country signed a vicious, and breathless, letter attacking the ranking of law schools. Because the rankings were not perfect, went their argument, no rankings should be published. (U.S. News was the intended target of this attack because of its predominance in this field.)
This was hysterical, of course, even in ways they did not intend. First of all, the tone of the letter was a caricature of all the worst qualities of lawyers. It was whining, outraged, and immoderate, the product of downtrodden victims. Second, it treated the users of rankings as small children who would never be able to extract value from a ranking without being misled to their detriment. (What an interesting view these deans have of their future students.) Third, the deans themselves often play up these same rankings whenever they are not about to be quoted in the press. One marvelous example: Stanford’s Dean, Paul Brest (a signatory of the notorious deans’ letter), upon his retirement, sent a valedictory letter to Stanford Law alums. His first substantive comment bragged about how well Stanford had recently done in the U.S. News rankings, while joking that the rankings were flawed insofar as they had failed to place it first.
Even though the U.S. News rankings are open to criticism, as this chapter amply demonstrates, the law schools are poorly situated to criticize them. After all, many of these same law schools admit a large part of their classes on the basis of just two numbers (candidates’ LSAT and GPA). U.S. News at least has the good grace to consider a dozen inputs in making its decisions.
A more reasonable response to the U.S. News rankings would be to encourage various publications that routinely rank business schools and other educational programs to rank law schools along lines favored by the schools. Having a full set of different rankings available in the marketplace would presumably provide a better-rounded view of the schools and lessen the impact of any one approach to rankings. Few leading figures at these schools have yet taken such a proactive view of how to respond. Instead, they remain locked in the complaint culture of American law.
In fact, there is an even more troubling aspect to the deans’ conduct. They treat the U.S. News rankings as though they are a major, avoidable tragedy. Unfortunately, they quite miss the point. Going to Michigan instead of Berkeley—or vice versa—due to a misplaced reliance on some rankings is highly unlikely to destroy someone’s life. What is tragic, however, is the dirty little secret the deans fail to acknowledge: So many people who enter law school end up desperately unhappy in the profession. Thus, the critical choice applicants make is not between two high-ranked law schools, but between attending law school and doing something else altogether.
The deans do not wage any protests about this, or write outraged letters to editors. Indeed, they do not even study the fate of their own graduates. How many longitudinal studies correlating graduates’ happiness with the number of years worked before law school, the nature of that work, the field that the graduates entered after law school, the nature of their employer, and so on have the deans commissioned?
The career services directors at the schools are very clear that a substantial minority of graduates are sorry that they entered law. Why are they not on the admissions committees at the law schools? Why do the applications not require, as business school applications do, that applicants demonstrate that they have explored their career options and thought seriously about them?
U.S. News may have a case to answer, but so do the deans.
ADMISSIONS DEANS DISCUSS THE “DEANS’ LETTER”
Although almost all deans signed the letter criticizing the U.S. News rankings, many of those deans tout their school’s success in rising up the rankings “food chain” in annual letters to alumni, when recruiting faculty candidates, and, in some cases, with brightly colored stickers slapped on the cover of the admissions bulletin. I suspect that at a great many law schools, the long-range plan uses the U.S. News ranking as the benchmark for success. Susan Palmer, Virginia
The reality is that they [the rankings] aren’t the only sources candidates use in making decisions. The deans’ letter was a bit paternalistic in assuming that candidates would use the rankings and no other information. Don Rebstock, Northwestern
I think it was an important step in trying to educate applicants about the rankings, but I don’t believe it’s had much impact as to the amount of weight applicants place on rankings. Rob Schwartz, UCLA

MORE RANKINGS

Obviously, the published rankings do not necessarily cover all of the issues and concerns you might have about law schools. This leaves you with room to make your own rankings, tailored to whichever criteria you deem to be most important. Following are a few of the “rankings” you might consider helpful.

DEPARTMENTAL RANKINGS

If you are farsighted enough to have decided what type of lawyer you aim to be, it would be wise to consider not only overall school rankings but departmental ones as well. U.S. News, for instance, ranks schools in eight specialties, ranging from dispute resolution to tax law. Besides consulting published departmental rankings, be sure to talk to employers in the particular field. Ask them to give a rough ranking of schools offering concentrations in that field.

JOINT-DEGREE RANKINGS

If you are considering getting a joint degree, pay attention to the ranking of the other program. Depending on your intended career path, the reputation of the nonlaw program may be the more important of the two.

JUDICIAL CLERKSHIPS

Many law school graduates want to get a judicial clerkship, some for the prestige, others for both the prestige and the learning opportunities. It is clear that Yale sends a much greater percentage of its class to clerkships than do other top schools:
As is usually the case, however, this data is not unambiguous. Is Yale more successful in garnering clerkship laurels for its graduates, or do its graduates disproportionately seek clerkships? Clerkships are a natural halfway house for those intending to litigate or to teach—or to avoid committing to a real job for another year or two. Those intending to be transactional lawyers, on the other hand, will less often seek a clerkship because they are likely to learn little that will be of direct relevance to doing deals. Yale certainly produces a lot of graduates who would like to do public interest litigation, work for the government in litigation-related fields, or teach law. It also has a lot of graduates who are still uncertain what they want to do in the long run, which is fitting for a school that prides itself on taking a more intellectual (and thus less pre-professional) approach to the study of law than do most of its counterparts. This attracts a lot of people who want their graduate study to resemble their undergraduate work—to be a continuation of a liberal arts education.

NUMBER OF RECRUITERS ON CAMPUS

It is a simple matter to divide the number of employers visiting the campus by the number of students graduating. The higher this number, of course, the more employment opportunities are likely to be on offer for the school’s students. This is certainly valuable as a very rough-and-ready guide. If one school has only one and a half employers per student whereas another school has ten employers visit for each graduating student, it is a reasonably safe bet that students at the latter school will have more employment offers from which to choose.
As with most such figures, however, this one should not be pushed too far. The difference between having eight employers and ten employers recruiting each student is hardly likely to be meaningful. After all, you can only work for one. As a means of comparing disparate schools, this too faces limitations. It is not clear whether a school with ten employers per student is twice as good as a school with five employers per student, or just marginally better. For instance, imagine that both you and your closest friend applied to the same ten law schools. You got into all of them and decided to attend Harvard; your friend was accepted only by Harvard (being denied by the other nine), and decided to attend Harvard. Did you do ten times as well as your friend in this situation, somewhat better, or only very marginally better?

NONLAW EMPLOYERS RECRUITING ON CAMPUS

If you think you might want to work outside of law, you probably should not attend law school. If you remain determined to go to law school, though, you should consider attending a school that has successfully placed graduates in the nonlegal field you are considering. Similarly, you should check to see whether these employers recruit from a given school. For someone interested in strategy consulting, for example, it would be sensible to see how many of the prestigious strategy consulting firms (Bain, Boston Consulting Group, McKinsey) recruit on campus; note, too, how many they hire. If you are interested in investment banking, check how many of the bulge bracket firms recruit there, as well as how many they hire—and repeat the exercise for whatever boutique firms or specialists are of interest to you. The same exercise can be performed for whatever field or fields interest you.
SHOULD DATA BE GIVEN IN ABSOLUTE OR RELATIVE TERMS?
One of the methodological issues for which there is no definitively “right” answer is whether to adjust data from absolute numbers to per capita—or proportional—figures. For example, should the number of Supreme Court clerks per school be given as a total or adjusted for the number of graduating students per school? Consider a hypothetical example, with only two schools to choose from. If Harvard had twenty clerks and Yale ten, but Harvard had four times the graduating class size of Yale, what would be the “correct” figures?
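The arithmetic behind the comparison is straightforward. Assume, purely for illustration, graduating classes of 800 at Harvard and 200 at Yale; the four-to-one ratio comes from the hypothetical above, but the absolute class sizes are assumed.

```python
# Absolute versus per capita clerkship figures for the hypothetical above.
# The 4:1 class-size ratio comes from the text; the sizes themselves (800 and
# 200) are assumed for illustration.

schools = {
    "Harvard": {"clerks": 20, "class_size": 800},
    "Yale": {"clerks": 10, "class_size": 200},
}

for name, data in schools.items():
    share = 100 * data["clerks"] / data["class_size"]
    print(f"{name}: {data['clerks']} clerks in total, {share:.1f}% of the class")
# Harvard: 20 clerks in total, 2.5% of the class
# Yale: 10 clerks in total, 5.0% of the class
```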
A case can be made for either measurement. If you want to attend the school with a higher percentage of future clerks, you would opt for Yale (assuming that past performance was an accurate indicator of future results). You might figure that more of your time would be spent around one or another of these paragons than in a school in which they were more dispersed across the student body. Or you might assume that this figure was an indicator of the school’s overall quality.
Under some circumstances, however, you might prefer Harvard. You might want to get to know the greatest number of future clerks so as to have the best network later in life, or because you planned to write an exposé of the Supreme Court and wanted the greatest number of sources. You might also believe that future clerks would be likely to identify one another and stick together, thereby giving you the greatest concentration of them at Harvard rather than Yale—presuming, of course, that socializing or intellectualizing with them was your primary goal for law school.
This is by no means the most intractable of methodological problems—that distinction may belong to the question of how different variables should be weighted to produce an overall ranking. But by realizing how a simple matter is open to differing views, you can understand just how complicated ranking schools really is.

MANAGING PARTNERS

The Law Firms’ Yellow Book collects data from some eight hundred large law firms. The law school from which each firm’s managing partner graduated is noted. The idea is that making it to the top of a large firm (and thus to the top of the legal profession) requires some combination of determination, managerial skill, interpersonal abilities, and perhaps a valuable network. Knowing which law schools these lawyers attended might suggest where such talents (and assets) are developed. The limitations of this approach, however, are clear. This is another measure that focuses on just a handful of graduates. In this case, nearly all of them graduated decades ago, making it all the harder to be sure that what benefited them will continue to benefit those who come some thirty or forty years later. In addition, the summary data is unadjusted for size of schools’ graduating classes.

OTHER POSSIBLE RANKINGS

The rankings discussed in the first part of this chapter use various means to arrive at their results. They could use any number of other approaches. Nothing stops you from ranking schools on the basis of what most interests you. For example, you might choose any one or more of the following:
• Percent men (or women, or minorities, or Asian-Americans, or foreigners)
• Percent of incoming students with more than five years’ experience (or over the age of thirty)
• Number of books produced by a given school’s faculty that are used at other top schools (perhaps adjusted by number of faculty)
• Citation index: number of citations (in the leading journals) of articles written by a school’s faculty (perhaps adjusted by number of faculty)
• Percent of graduates entering a specific field (law teaching, public interest, environmental, tax, criminal, and so on)
• Number of courses in your chosen field (or percentage of courses offered in that field)
Each of these could be a valid measure for what you most desire. The first two measures, for instance, are means of determining whether you will have classmates who are most like you—or most unlike you, if you look to learn from those who least resemble you—whereas the next two measures can be used to assess the scholarliness of the faculty. There are innumerable other possible bases for rankings, depending upon what you seek from your law school education.

CONCLUSION

It is impossible for a ranking to assess (accurately or otherwise) the large number of concerns, requirements, and questions that every conscientious law school applicant has. As an approximate gauge of a school’s reputation, however, the rankings—taken together—can be useful indicators. Their rough-and-ready nature, however, suggests that they indicate the broad class to which a school belongs rather than the precise ranking it should enjoy.
Ultimately, whether the rankings determine reputation or measure it ceases to be the issue. The school considered “best”—by whatever standards—will generally attract the “best” students and “best” employers. Because you are bound to benefit as much from your peers as your professors, and are undoubtedly concerned primarily about your job opportunities, reputation is an extremely important factor to consider.
ADMISSIONS DEANS DISCUSS THE RANKINGS
WHAT IS WRONG WITH THE RANKINGS
 
We analyze undergraduate performance from a multivariate perspective. We consider such factors as grade inflation patterns (which vary significantly across the universe of undergraduate colleges), the rigor of the applicant’s course of study, and the selectivity of the college attended. Among the more than two thousand colleges in this country alone, there exists a remarkable range of selectivity (and resultant student quality). That not all colleges are equal in terms of their selectivity and student quality is widely known, but, for reasons I have never been able to understand, it is an educational reality ignored by many law schools as they undertake their own student selection. Similarly, U.S. News & World Report (which accords 40 percent of its selectivity measure to UGPA) fails to recognize these contextual realities in weighing median grade-point average. Jim Milligan, Columbia
Where I have a bone to pick with the rankings is when they insist that you should weight various factors in a certain way. The way a given ranking weights things may not line up with how any one individual would weight things. So I’d advise students to look at the data, figure out what’s important to them, and focus on those factors. Don’t just default to someone else’s judgment that employment upon graduation is worth 12.5 percent of a school’s total score, expenditure per student is worth 20 percent, and so on. Josh Rubenstein, Harvard
Ordinal rankings tend to magnify minute statistical differences among very similar law schools. I often tell applicants that the top schools look very similar statistically and what is important at this level is feel. Are you going to be happy attending School X over School Y? If so, why? If you understand the answer to that question, you have come a long way in selecting the top school that is right for you. Jason Trujillo, Virginia
There are a large number of problems with them. For instance, placement success is of critical importance to prospective law students, but the rankings simply report the percentage of students employed (which could include those pumping gas) instead of focusing on the nature of their employment. The vast majority of the top law schools report over 95 percent employed, so choosing a school on the basis of tiny differences in placement statistics makes little sense to me. Rob Schwartz, UCLA
U.S. News is probably wrong in failing to adjust acceptance rates for class size. The applicant pool for top schools is largely fixed; inevitably therefore larger programs accept a higher percentage of applicants. Jim Milligan, Columbia
 
WHAT IS RIGHT WITH THE RANKINGS
 
It’s a little too “holier than thou” to object that since we’re institutions of higher learning, it’s foolhardy, misleading, and sinful to rank us. We have consumers like any other business. It isn’t the worst thing possible. It is, for the most part, OK and mirrors the reality of law schools’ quality. Andy Cornblatt, Georgetown
The rankings perform a useful function by keeping schools from becoming complacent. They need to work hard to keep their rankings. Our deans didn’t sign the “deans’ letter” protesting the use of rankings because they are a reality of our society and because—if you look at the groupings within those rankings—they tend to be pretty accurate reflections of how the public views the schools, which is highly relevant to the decision of which school to attend. Don Rebstock, Northwestern
It’s great that U.S. News and others collect so much information. More information and transparency are good because law school is such a big commitment that people should know what they’re getting into. Josh Rubenstein, Harvard
 
HOW YOU SHOULD USE THE RANKINGS
 
Most admissions folks find the rankings a bit unfortunate in that applicants take them too seriously. It’s very difficult to quantify and portray a large number of relevant factors and combine them into a single measure of quality. William Hoye, Duke
To a certain extent, the law school world created the rankings. For a long time we all claimed to be all things to all people. There was thus a need for some reasonably objective approach to give applicants some sense of the relative quality of different programs. The problem is that applicants rely too much on the rankings, which should be no more than a starting point for them. Elizabeth Rosselot, Boston College
The rankings are useful in giving you a sense of the general reputation of a school. However, they tend to give an impression of precision that misleads more than it helps. They make what should be a complex and subtle decision appear very simple. Rick Geiger, Cornell
Once you’re accepted by several schools, you shouldn’t decide which one to attend based upon a one- or two-point difference in their rankings. Of course, if the schools are ranked in entirely different tiers, the disparity in their reputations may make a difference to you. Megan A. Barnett, Yale
Use them with caution. Make sure you understand how they are calculated before determining how much reliance to place on them. Rob Schwartz, UCLA
You can use the rankings as a starting point: They give you an idea of what schools will be within your reach. But I’d strongly discourage you from selecting a school based solely on a rank simply because the rankings can change rapidly without much really changing at the law school in any given year. Chloe T. Reid, USC
The rankings (from all of the various sources) are a good way to begin your law school search but a disastrous way to determine where you will ultimately attend law school. The biggest problem with any third-party ranking source is the tendency to overstate its importance and adopt ranking criteria that are not your own. Prospective applicants should do their own legwork and evaluate respective law schools based on their own most important values and considerations. Monica Ingram, Texas
The rankings are a good tool with which to start. They offer a good way to assess the lay of the land. You can get a general idea of the reputation of a school—how it is viewed by others. But what makes a great law school great isn’t the rankings. You have to determine what’s most important to you. If it’s the faculty, look at the faculty rankings for student-to-faculty ratio as a start. Then delve further: Examine the backgrounds and scholarship of faculty in fields of interest to you. Then find out how they relate to students, how well they teach, how accessible they are. If it’s placement that’s important to you, start with the rankings for placement statistics. Then delve further. Find out where students are placed geographically, what types of practice they enter, and find out how the career placement staff assists its students. Derek Meeker, Penn
No one looks at U.S. News and thinks their rankings are insane. In fact, in a broad-brush way, they’re quite helpful. However, you shouldn’t delegate your decision to U.S. News. After all, whatever set of factors they choose—and the weights they assign them—are unlikely to be the set (or weightings) you’d choose. Sarah C. Zearfoss, Michigan
 
HOW THE RANKINGS AFFECT WHAT LAW SCHOOLS DO
 
The rankings have changed the admissions profession, perhaps irrevocably. What used to be a private, school-centered, professional practice has become a highly public one. Because the rankings so heavily focus on median LSAT scores, acceptance percentages, and the like, many law schools, regrettably, pay inordinate attention to quantifiable factors. At many American law schools, the role of the admissions professional has changed, sad to say, from one of counselor and student advocate to that of a salesperson or agent. Jim Milligan, Columbia
There is certainly an incentive to shrink the size of the 1L class (which does count in U.S. News) and increase the number of transfers (which do not count in U.S. News). We have not bitten. In fact, our transfer class has gone from twenty to fifteen to ten over the past three years. But there certainly is an incentive to do the opposite. Jason Trujillo, Virginia
The schools play to the rankings, of course, which then hurts the applicants. Schools now manage their yields, wait-listing qualified students or denying them if it seems that they probably would not attend the school. It means that schools now make the decisions for the applicants rather than letting the applicants themselves decide. For example, I’ve heard that some Midwestern schools seldom accept Ivy League candidates since those candidates tend to choose the coasts rather than the Midwest. Because of the way the rankings are constructed, it’s hard to reconcile what is best for the institution with what is best for the applicant. Elizabeth Rosselot, Boston College
They give us measurable goals, and provide useful pieces of information to prospective students; however, the factors upon which the various rankings are based are not necessarily the only factors by which we would like to be judged. Anne Richard, George Washington