by Ed Kazarian
 
In the course of a discussion on Facebook begun by a colleague’s thoughtful and nuanced reflection on the kind of rankings we might want to have in the discipline, I was moved to offer an objection to the idea of ranking departments at all. I think it stands outside that particular discussion, so I’d like to reproduce it here.
 
I have, basically, two reasons for thinking that the practice of ranking departments is unwise and likely to be harmful:  
1) Rankings devalue the work of an awful lot of folks and, perhaps more importantly, provide the various agents or forces within the institution that may be hostile to the discipline (and to the humanities more broadly) a ready excuse to claim that most philosophy departments at US universities (or at least at research universities) aren’t worth the investment they require to maintain. Given the way we have seen some such people abuse metrics of any sort (no matter how questionable said metrics might be), I find it difficult to understand why we insist on producing one ourselves.
 
2) It should be evident by now that the problem of determining acceptable ranking criteria in a pluralistic discipline is tremendously fraught and has proven to be very resistant to a solution that is broadly acceptable. 
Instead of rankings, I think we should be moving towards a model where we collect and maintain as much up-to-date information about the various programs out there as possible. The discussion I mentioned above had already produced some really wonderful thoughts on what such a ‘database’ of programs might contain, and how it would benefit the various constituencies in the philosophical community. Indeed, as Justin Weinberg notes in one of his early posts at Daily Nous, part of what has been very valuable about Brian Leiter’s effort is that it has facilitated the broad circulation of key information about the profession that had previously been difficult to access for many people, including prospective students. Surely that sort of transparency is something we want, no matter what else happens going forward, to preserve and enhance—and which the various responsible parties in the profession should be working hard to foster. 

11 responses to “Maybe the Best Rankings Are No Rankings At All”

  1. p

    I am not sure about this. In the “ancient” times before PGR, the “power and prestige” was way more concentrated in but a handful of departments. It is one of the big results of the PGR that there is now a much more pluralistic world of philosophy & that we are aware of a lot more departments where excellent work is being done. True, PGR does have a certain “bias”, as it were, but that bias is actually very much mainstream and it is hard to see how something like this could be done without any bias at all. True, there are other problems with it as well. But it is a rather invaluable source of information for aspiring graduate students – I, for one, would have been completely lost without it those years ago when, coming from Europe, I was applying for grad programs. Many of my friends – non-philosophers – who did this without having such a guide ended up making big mistakes.
    I, for one, remain unpersuaded by CDJ rankings for a variety of reasons – not least because I think that data crunching is not particularly useful in assessing philosophical quality and because I would have been better advised by PGR than by CDJ’s rankings if I were a grad student (at least as these currently are).
    One problem I think PGR has is the overall rankings, which I think should be generated somehow automatically from the specialty rankings rather than by evaluators who presumably can’t really be in a position to evaluate the whole department. Not sure exactly how to do so, but it is noteworthy that the departments that appear often and high in specialty rankings tend to score high overall (so the results would probably be similar).


  2. Joshua A. Miller

    I’m very sympathetic to this rejection of metrics, but I suspect it will fail.
    I think “no rankings” is like “no state.” The real task is to prevent upstart rankings and stationary bandits from filling the vacuum. People often forget that Leiter created his rankings as a response to a worse set of rankings, the Gourman Report, which seemed like it might just be a recapitulation of the NRC rankings, didn’t have a published methodology, and was run by a political scientist. That’s what you get if you opt for “no rankings.”


  3. Ed Kazarian

    Hi Joshua,
    That’s a useful historical reminder. But I think two things are worth noting in response. 1) Other disciplines, as far as I can tell, don’t have a similar institution to PGR. So the idea that such a thing must arise seems empirically false. 2) In some sense, a crappy set of rankings produced by someone outside the discipline without any real internal legitimacy would be an improvement given my worries in point 1 of the OP about the use that admin-types with a chip on their shoulder might put ‘our’ rankings to. I’d love to be able to say ‘that’s not ours and nobody within the field has anything to do with it.’


  4. Gregory Pappas

    I will share what I wrote to Leiter about Rankings (in general) on Mon 1/15/2007 6:42 PM
    To: Brian Leiter
    Subject: rankings
    Dear Dr. Leiter,
    I read your recent blog reaction to the NY Times article.
    I have no personal reason to defend Bruce Wilshire but I think there is an interpretation of his comments that is worth considering.
    Bruce Wilshire wrote:
    “We are a society that is great for rankings, horse races, polls,” said Bruce W. Wilshire, a philosophy professor at the college for 37 years. “I’m not sure what anything like this does for us. I’m sure some people think it is nice, but it seems like in philosophy, we should not be emphasizing this sort of thing.”
    It is not very clear to me what Wilshire is trying to say, but here is what I think he could have meant. In any case, it is what I believe.
    RANKING (in all dimensions of life) is somehow a deep-seated HABIT in American society. That is something that many who come from another culture have noticed. For sure, it is something that is hardly ever questioned.
    I concede that ranking can (in some situations) be a good “tool” but I have seen it so many times (especially in education) to be nothing but a VICE.
    I would say the same thing about the overemphasis in education on outcome-assessment and grades. I have become (perhaps as a reaction) anti-ranking, period. I refuse to play the ranking game.
    I wish I could have posted this on your blog but was not able to.
    Sincerely Yours,
    Gregory Fernando Pappas


  5. Jonathan

    Ed,
    In response to your other-disciplines comment: in English, the US News and World Report rankings have the place of the Gourmet Report, and they 1) have a very large impact and 2) are very distorted by the halo effect. It is a non-trivial bonus of the GR that it gets rid of such halo-ing.
    Jonathan


  6. Shen-yi Liao

    (2) seems like an excellent objection against omnibus rankings. I am not even sure PGR is intended to be that, though it certainly often gets used as such–arguably contrary to its stated aim and methodology.
    However, (2) does not seem like a good objection against highly-restricted, explicitly-stated, uni-dimensional rankings, e.g. a ranking on the likelihood of programs’ placements of graduates into tenure-track jobs within three years. My own view is that we should have a proliferation of uni-dimensional rankings that allow prospective students to mix and match according to their preferences. It’s also hard to see how such rankings would devalue people’s work. A proliferation of such rankings would also prevent people from mistakenly thinking that any one is supposed to be a general ranking of departments.


  7. David Wallace

    Here’s my pessimistic prediction: not necessarily likely, but distinctly plausible:
    – Brian Leiter will stop running PGR;
    – people who want an alternative will discover how enormously time-consuming things like this are in practice, if they are to be done consistently well enough to build up a stable reputation;
    – so no new ranking or rankings appear to replace the PGR;
    – but people’s high-minded ideal of managing without a ranking runs aground on the rocks of practical need;
    – instead, the 2011 PGR, preserved in amber and supplemented by ad hoc impressionistic thoughts about what has changed since then, lingers on in undead (and metaphor-mixing) fashion as our discipline’s de facto standard for departmental comparisons.


  8. dowen

    I’d like to agree that the main problem is ‘overall rankings’ so here is a thought:
    A) keep specialist rankings (and, given the controversy over continental philosophy in the form that Brian dislikes, do two continental rankings)
    B) take a plurality of aggregation procedures and produce a plurality of rankings (apart from anything else, it would be interesting to see how much stability or variation this produced)


  9. Ed Kazarian

    Hi David,
    I agree that your scenario would be a really unfortunate outcome—and not only because of my objections to PGR as it stands. Anything that fails to reflect the developments of the field but still has the imprimatur of authority is inherently problematic.
    I hope, however, that if we engage the question of whether we want rankings or, borrowing a formulation from John Protevi, ratings or descriptions (which may or may not coincide with what I’ve been calling information) in a more substantive way, we might end up with something more like a positive decision to stop trying to produce an authoritative ranking but rather to proceed differently.
    This might well require a decision on the part of the APA (and other organizations) to issue some sort of policy statement saying that ‘we do not endorse any global ranking of departments’–which would help to shut down any administrative attempts to use the outdated rankings against us. But again, if we actually did the work of developing a consensus around a new direction, even if that direction were away from any sort of rankings, such a thing would seem to be within reach.


  10. Ed Kazarian

    More generally, on the question of specialist or uni-dimensional rankings, I’m pretty sure that if the information on which such rankings could be based were collected and made widely available (as I propose in my last paragraph above), the process of making them would become much easier.
    I’d really like to democratize this as much as possible. Ideally, I’d like ‘ranking’ to be something prospective students do for themselves, rather than something we suggest to them.

