Some years ago, I authored and circulated an open letter to Brian Leiter expressing concern about the influence the Philosophical Gourmet Report was having both upon students who were selecting graduate schools and upon the profession more generally. As a result of my move from Harvard to Brown,1 the website where the open letter and accompanying material had been posted ceased to exist.2 I thought about moving the old site to a new location, but by then it had been almost four years since the open letter was sent. It seemed inappropriate to re-post a somewhat out-of-date website, so I didn't. But then people who had been encouraging their students to look at it for another perspective started writing to ask what had happened to it, so I figured I'd better do what I'd long been meaning to do and write some up-to-date remarks on the Report. Anyone who would like to read the original criticisms may find them on the Wayback Machine.

The Report has changed in the intervening years,3 in most ways for the better. Many people had expressed concern, for example, that something with as much influence as the Philosophical Gourmet Report ought not to be controlled by one individual. Leiter remains in charge, of course, but a formal Advisory Board is now in place. Unfortunately, the Board is unrepresentative of the field, but that is some progress, nonetheless. The scores are finally normalized; and although, as Leiter himself notes, normalization introduces new biases of its own, since not everyone ranks every department (a point the sketch below makes concrete), that too is progress. And perhaps most significantly, Leiter no longer compiles the rankings within individual areas on his own but now includes area rankings in the survey. That is definitely progress.
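To see how normalization of incomplete ballots can mislead, here is a minimal simulation. I stress that nothing in it reflects the Report's actual procedure, about whose details I make no claims; the departments, quality scores, and ballot pattern are all invented. The point is only structural: if each evaluator's scores are normalized against the other departments on the same ballot, and evaluators do not rate a random sample of departments, the normalized averages can invert the true ordering.

    import numpy as np

    rng = np.random.default_rng(0)
    true_quality = np.linspace(1, 5, 20)   # 20 invented departments; higher is better

    def normalized_means(ballots):
        scores = [[] for _ in true_quality]
        for rated in ballots:
            # Noisy raw scores for just the departments on this ballot.
            raw = true_quality[rated] + rng.normal(0, 0.3, size=len(rated))
            z = (raw - raw.mean()) / raw.std()   # per-evaluator normalization
            for d, s in zip(rated, z):
                scores[d].append(s)
        return np.array([np.mean(s) for s in scores])

    # Evaluators rate only the departments they know: half rate the
    # stronger ten, the other half the weaker ten.
    ballots = [np.arange(10, 20)] * 25 + [np.arange(0, 10)] * 25
    means = normalized_means(ballots)

    # Department 10, the weakest of the strong group, now scores below
    # department 9, the strongest of the weak group: the reverse of
    # their true qualities.
    print(means[10] < means[9])   # True

With randomly assigned ballots, such effects would wash out on average. The trouble is precisely that evaluators tend to rate the departments they know best, and those are not a random sample.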

Despite these changes, however, there are still serious problems with the Philosophical Gourmet Report. For example:

  1. Just as it has for years, the Report ranks "graduate programs" on the basis of a single factor—the quality of the faculty's research—whose correlation with the quality of a student's graduate education is, though surely positive, arguably small. Factors that are arguably more significant, such as how devoted the faculty are to graduate teaching and whether they are any good at it, are ignored.4
  2. There is no reason even to believe that the survey accurately measures the quality of the research being done by a given institution's faculty. It asks respondents to rate entire departments, although individual respondents cannot be expected to have first-hand knowledge of the work of more than a few people in any department. An individual's ranking of a "department" must therefore either reflect the work of only a few of its members or, more likely, be based, to a significant extent, upon reputation, not the quality of current research. What the Gourmet Report measures directly is thus faculty reputation.
  3. As one prominent philosopher once mentioned to me, one would certainly hope that there was some significant positive correlation between someone's reputation and the quality of h'er research. But it is not obvious how strong the correlation is, especially with younger people or people who work in more technical (or simply less popular) areas. More importantly, to defend the Philosophical Gourmet Report on such grounds is to commit a simple statistical fallacy: two significant positive correlations, that of the quality of a program with the quality of research and that of the latter with reputation, need not yield any significant positive correlation between program quality and reputation. Correlation is not transitive. (The simulation following this list makes the point concrete.)
  4. That said, there is presumably some positive correlation between the Gourmet Report's rankings and the quality of a given department's graduate program. But it remains an open question how well the Report's rankings track anything that should matter to a potential graduate student. Such attempts as have been made to correlate the Report's rankings with placement results, say, have been inconclusive, offering no support for anything more than a very weak correlation. And, in purely practical terms, aren't placement results what matter to prospective graduate students?
    To mention placement results is to expose oneself to ridicule on the ground that one fails to recognize that the Report is supposed to reflect current strength, whereas placement records reflect past strength. But that is a very silly criticism, the obvious reply being that past placement records may nonetheless be a very good (although, of course, imperfect) predictor of future placement success, possibly a much better predictor than the results of a survey. Moreover, the research people are being asked to rank was also done in the past. How well the quality of past research reflects future placement success would seem a very open question.5
  5. The Report has a built-in bias towards large departments.6 For some time, Leiter himself tried to correct for this bias by awarding smaller departments extra points.7 That practice, which was absurd on its face, has since stopped, so the bias towards large departments is directly reflected in the rankings.
  6. For the 2004–06 survey, "More than half those surveyed were philosophers who had filled out the surveys in previous years; the remainder were nominated by members of the Advisory Board, who picked research-active faculty in their fields." The risks of relying upon a self-selecting group should be obvious.8
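Since the point about correlations in item 3 may sound like pedantry, here is a minimal simulation showing it is not. Python is used only for convenience, and the variable names are invented stand-ins, not measurements of anything; the point is purely structural: two strong positive correlations are compatible with no correlation at all between the two endpoints.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    # Two independent, invented traits. "research" correlates strongly
    # with each of them, but through different components, so the two
    # traits share nothing with one another.
    teaching = rng.normal(size=n)   # stand-in for quality of graduate education
    fame = rng.normal(size=n)       # stand-in for reputation
    research = teaching + fame      # correlates with each of the above

    def corr(x, y):
        return np.corrcoef(x, y)[0, 1]

    print(corr(teaching, research))   # ~0.71: strong and positive
    print(corr(research, fame))       # ~0.71: strong and positive
    print(corr(teaching, fame))       # ~0.00: nothing whatsoever

In fact, the endpoint correlation is forced to be positive only when the squares of the two intermediate correlations sum to more than 1. Two correlations of about 0.7 each, as here, do not clear that bar, and weaker ones fall well short of it.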
Anyone with any experience conducting serious studies that rely upon such surveys—and yes, I've talked to several such people—would know how dangerous, even potentially crippling, such flaws are. I'm still as puzzled as I always have been about why such glaring methodological flaws are tolerated by people—Leiter, by his own account, anyway, and members of the Advisory Board—who claim to have only the best interests of undergraduates at heart. Frankly, the oft-trumpeted fact that some students are so hungry for information that they would take to rejoicing when even a scrap of crust fell from the table doesn't much impress me. Most defenses of the Report come down to "It's better than nothing". Well, maybe it is, and maybe it isn't. But either one cares about providing reliable information or one does not, and the apparent lack of concern about the sorts of problems just mentioned makes me wonder.

Another, and in some ways more serious, worry concerns the influence the Report has upon the profession as a whole. Partly as a result of the factors just mentioned, the overall rankings in the Report are biased towards certain areas of philosophy at the expense of others. The most famous such bias is that against continental philosophy. I don't much care for that style of philosophy myself, but it isn't transparently obvious why Leiter's oft-expressed and very intense distaste for much of what goes on in certain "continental" departments should be permitted to surface so strongly in the rankings.9 Other biases are less obvious but every bit as real. It is well understood in the profession that hiring someone pretty good who works in philosophy of mind will have more influence on a department's overall ranking than will hiring someone much better who works on logic, let alone on ancient or medieval philosophy. I have been told that this fact has actually influenced hiring decisions—told, that is, by people who were present at meetings where such decisions were made. I'm sure most supporters of the Report would be as concerned as I am about such events. But what's to be done? Should departments simply not consider how their hiring decisions might affect their ranking? That isn't very realistic, especially when administrators have taken to confronting departments with their reduced rankings and demanding action, which is something I've personally seen happen (not at Harvard) and have been told about many other times. The only real solution is to put an end to the disproportionate influence a department's strength in so-called "core" areas of metaphysics and epistemology has upon its overall ranking. Better still would be to produce a set of rankings that, at the very least, doesn't have the sorts of flaws that one knows, in advance, will lead to some such biases.

In closing, let me repeat something I've said elsewhere. I don't actually think the Philosophical Gourmet Report is completely useless. As I've said several times, I think there is a small but positive correlation between the Report's rankings and the quality of graduate programs. The Report can therefore be useful to students who are considering where to apply. The decision where to apply is sufficiently coarse-grained that "small but positive" will be helpful, so long as the usual warnings are heeded. But, in my opinion, it would be a serious mistake to give the Report's rankings any credence when making a more fine-grained decision, such as which graduate school to attend. Perhaps the correlation is good enough that it would rarely be wise to choose a school ranked around 30 over one ranked around 5. But that's not usually the sort of decision with which students struggle.


1It has often been speculated that my criticisms of the Report were motivated by a desire to defend the honor of the Harvard philosophy department against perceived slights. So long as I was at Harvard, I was limited by my obligations to that department in how I could respond to this criticism. Now that I'm not at Harvard, I should like to take the opportunity to set the record straight.
It is indeed true that I have long regarded the Report's various rankings of Harvard as misleading, but I was never out to "defend" Harvard. (The smoking gun Leiter claimed to have found—an article in the Harvard newspaper featuring a quote from Gisela Striker and a remark from me to the effect that, yes, the Report has influence—is laughably unimpressive.) In some respects, yes, I think Harvard has sometimes been badly under-ranked. For example, Harvard was producing some very strong epistemologists—Adam Leite and Tom Kelly are two—during a period when it did not even appear on the epistemology rankings. My first contact with Leiter, in fact, consisted of a letter in which I bemoaned this fact and argued that the presence of Bob Nozick and Jim Pryor, with ample support available elsewhere, ought to have garnered us at least a mention. (Harvard was mentioned in the next year's rankings, as it happens.) Bob, I suppose, was overlooked because he hadn't worked in the area for some time, and Jim was overlooked because he was young. That's precisely the sort of combination that gets one overlooked, and students interested in epistemology may have been discouraged from attending Harvard by its absence from the list, to what might have been their loss. (With Bob's untimely death and Jim's move to Princeton, such students might have been better off elsewhere, in the end, but that's the sort of thing that can happen at any department.)
In other respects, I think Harvard has been over-ranked, in large part because it has benefited from the very "halo effect" that some supporters of the Report see it as counteracting. The idea that presenting lists of faculty without naming the department counteracts the "halo effect" is simply silly. Departments with illustrious histories will benefit from them in the rankings whether they are explicitly named or not. Anyone who doesn't know which department is Harvard, which Princeton, which Yale, which Rutgers, which Stanford, which UCLA, and which Columbia, has no business filling out the survey. Perhaps "not includ[ing] the name of the university with the faculty lists [is] beneficial in forcing evaluators to respond to the current faculty" (PGR). That is, perhaps it has some effect, but I know of no reason to believe it has much of one. To the contrary, the much discussed "staying power" of traditionally strong departments even after significant deaths, retirements, and departures is strong evidence that it has little. But lest I be accused next of sour grapes, I should probably say no more.

2Leiter apparently takes some satisfaction in the fact that the link to the original site has been removed from the Harvard philosophy department's website. I removed it, before handing the site over to its new maintainer, since I knew the link was about to go dead. (Try visiting emerson.fas.harvard.edu. The machine that used to have that URL now lives at frege.brown.edu.)

3Leiter says he didn't make any changes in response to criticisms of the Report. I'll leave it to others to speculate about whether those criticisms might have had some effect via other routes, such as via members of the Advisory Board who thought some of the criticisms had some merit but who expressed them more gently than I did. (If I had it to do over, I'd be a lot more gentle.)

4Brian Weatherson has some very nice things to say about why a department like MIT might be under-ranked. (Brian seems to have been the first to realize that it was the Report's treatment of MIT that had gotten under my skin and driven me to act.)

5To infer, from the facts that there is some positive correlation between the Report's rankings and research quality and some positive correlation between research quality and the quality of graduate education, that the rankings must therefore correlate with the quality of graduate education would be to commit, once again, the simple statistical fallacy discussed above: correlation is not transitive.

6I should note that there are some who deny there is any such bias, but I don't see how one could seriously defend such a position, given the survey's methodology. The fact that there are others who think that, even if there is such a bias, it's not objectionable is enough to make one start worrying about self-knowledge.

7In the 2001 Report, for example, small departments were awarded an extra tenth of a point.

8Kieran Healy's analysis of the Report's data uncovered an unusually high degree of consensus among those responding to the survey. As he had no access to other data, he was of course unable to determine to what extent that consensus was an artifact of how the respondents were selected. The issue was raised in the discussion that followed, however, and interested parties will find it makes good reading.

9It does so, of course, because it influences who is asked to complete the survey, what departments are represented, and so on and so forth. For some discussion, see John Hartmann's comments on Leiter's treatment of continental programs.