"No Rankings, Not Now, Not Ever" is the rallying cry for the October Statement, and over a hundred philosophers have signed. They think it would be better not to have rankings of philosophy departments. For all I say here, they might be right. The trouble is that there's no way to sustain an absence of rankings when the internet exists, so "no rankings" is not actually a live option.

Rankings are very easy to produce and distribute. With a few philosophers and a bottle of tequila, you can make up some idiosyncratic departmental rankings in an evening. With the internet, you can make it all instantly available to everyone. The funny thing is, I'd actually be more interested in your idiosyncratic tequila-driven rankings than the opinions of internet journalists with only a passing interest in philosophy, posting rankings promoted by the prominence of their media organizations. Even if I disagree with you, your rankings were made by philosophers who read lots of stuff while earning PhDs, and whose opinions I'm interested in engaging with. But in a battle for the attention of undergraduates from universities with few research-active philosophers (and worse, Deans!) the media and its promotional machinery can win.


As far as I know, there weren't any influential and public departmental rankings before the internet. Maybe some October Statement signers think there's a way to return to those days. I'm doubtful, because a rankings vacuum only seems possible in a pre-internet era. Back then, the only way to get everyone to see a set of departmental rankings was to send them physical copies of the rankings at considerable expense. For something to be distributed in this fashion, it had to be of very broad interest (like national election news or major league sports), or of interest to wealthy people who would pay for subscriptions (like an investment newsletter). Philosophy department rankings are neither.

Soon after the internet became a thing, rankings were on it. Distribution costs are now zero. Any philosopher can use the internet to promote an idiosyncratic agenda, and any website can spit its own rankings onto the internet and use them to sell ads. Heaven help you if your administration falls under the spell of rankings made by the media, perhaps because of the machinations of some consultant whose only graduate degree is an MBA. This will unleash influences on hiring and tenure decisions that no philosopher would find congenial.

October Statement signers may have some clever ideas about how to maintain a rankings vacuum, or systematically delegitimize all the rankings that will pop up. They should tell us! Many of them want a ranking-free informational database, which would in fact be very nice, but lots of rankings would spring up alongside it. I don't know how they could sustainably prevent bad rankings from influencing prospective graduate students and university administrators, who are interested in comparatively evaluating departments. The point of preventing the philosophical community from coming up with its own rankings would then be lost.

If the October Statement successfully prevented us from coming up with our own ranking system in the short term, philosophers might still build a centralized ranking system in the end. If QS or some other media-run set of rankings gained so much power that philosophers needed a defense against them, our professional associations might have to develop an officially approved ranking system that actually fit philosophers' needs, and make it rise above all the bad rankings with their stamp of approval. The time when philosophers were ruled by nonphilosophers' rankings would be remembered as a grim age. Instead of having such an age, we should build and support the best system of rankings we can now.

With editorship of the PGR turning over, we're in a moment of flexibility where new ideas could change how we rank departments. Brian Weatherson has some. Carolyn Jennings is trying to collect data on job placement, which is an end product of graduate education that we all care about. If you want to only have specialty area rankings, or more directly incorporate job placement data in the ranking process, or democratically elect evaluators, this is the time to speak up! Many people who signed the October Statement probably have smart things to say about what would make a ranking system better (hi Jenny Saul!). We'd benefit if they'd express them in our emerging discipline-wide conversation.

The only options are good rankings and bad rankings. No rankings isn't a live option, any more than having Spider-Man rank all the departments. So if you signed the October Statement because you think no rankings would be best, I hope you'll still talk to us about how to build good rankings.


36 responses to “A rankings vacuum is unsustainable in the internet era”

  1. Roberta L. Millstein

    Neil, do PGR-less disciplines have the proliferation of departmental rankings that you describe? I somehow doubt it, but I am willing to be corrected.
    In some ways a proliferation of clearly idiosyncratic rankings would be better than what we have now. The status of, e.g., Sinhababu’s Ranking would be clearer than that of one that purports to represent the discipline. I for one would be interested to see how my fellow philosophers of biology would rank schools in that area, but that’s more curiosity than anything else. I still think, as I have said elsewhere, that a database with qualitative information would be far more useful for prospective graduate students than any ranking.
    Furthermore, I think there is something philosophically troubling with thinking that ranking is problematic (for those that do think this) and yet participating only in order to make a somewhat less problematic ranking. As Mitchell Aboulafia eloquently said, “the question is whether we as philosophers are not only aware of institutional biases, but whether we are comfortable actively promoting them, entrenching them, lending our good name to furthering them. We should be doing everything in our power to level the playing field when it comes to hiring in Philosophy. And this means diminishing as much as possible the halo effect, not supporting a system that creates halos and then—all too often—trumpets them to the world” (see http://upnight.com/2014/10/07/the-halo-culture-taking-the-rankings-challenge/).

  2. Anonymous Student

    I don’t find this argument (“the internet exists, therefore we must all work together to develop a new de facto ranking system”) the least bit compelling. Obviously, anyone can put rankings up on the internet. Similarly, anyone else is free to critique them. It’s not a given that one would emerge victorious with a clear consensus of approval within the profession (and that is what it takes before deans and others in charge of hiring decisions take notice). In fact, judging by recent controversy, it seems unlikely that a consensus would emerge at all. If a ratings system is impotent without consensus and you are opposed to rankings in general, here is one thing you can do: ignore alarmist calls that demand we must now all work together to develop consensus on how to rank philosophy departments, in order to avert some sort of impending crisis. Why not put forth a positive argument on the merits of rankings, if that’s how you feel? I find this type of alarmist argument rather unconvincing.

  3. Simon Evnine

    Just to put my 2c down with the previous two commenters, the threat isn’t from the mere existence of rankings. The danger arises only when one (or more than one) ranking has enough status. We’ve been told several times in all the recent discussions that other disciplines don’t have anything like a PGR; if those claims are true, it’s obviously possible for philosophy not to have one. (It would, incidentally, be nice to have some reliable info about these other disciplines. Our discussions about rankings in philosophy have often proceeded as if there were no cases for comparison.)

  4. Enzo Rossi

    I have no settled view on this yet. But I’m curious: what exactly are the pressures on departments due to US News rankings? (Genuine question.) I’ve a philosophy background but I’m now in political science. The US News rankings seem to matter a lot to my American colleagues, even though they know the methodology is at best questionable. Whatever ranking is the most widely accepted one seems to crystallise into a self-fulfilling prophecy. There might be something positive in having a more random ranking play that role: it keeps the discipline relatively united against a front of detested oracular external evaluators, rather than creating a small disciplinary in-group who gets to evaluate everyone, police boundaries, and so on.

  5. David Wallace

    Put ” department rankings world” into Google. (I’ve tried this for History, Philosophy, English and Classics). Scroll through the first few dozen entries. The first few are usually some externally run rankings service (most commonly the QS rankings; for Philosophy, PGR comes 5th). The next few dozen are mostly the home pages of various departments touting their position in these rankings. In Philosophy that’s often (not always) the PGR, but departments in other subjects seem to be doing it with high frequency too.
    I wouldn’t say that’s remotely conclusive, but I think it’s interesting.
    (Incidentally, I also get a high percentage of UK institutions. I don’t know whether that’s because metrics like that carry more weight in the UK, but it might just be because Google notices my UK IP address and aims to please!)

  6. David Wallace

    urgh, sorry. That should have been “[subject] department rankings world”. I used angle brackets and it got mistaken for HTML.

  7. Geoff Pynn

    I think that the specialty rankings are (in theory) very useful for prospective students, which is who the PGR is supposed to be for anyway. My preference would be to do away with the overall rankings, improve the specialty rankings, and add an interactive tool for creating personalized overall rankings on the basis of the specialty data. So, e.g., you could come up with your very own “M&E” ranking by selecting all of the areas you think count as M&E, and the tool would give you a ranking based on overall strength in the selected areas. There are challenges: we’d need to come up with a better system for assigning evaluators to various areas, and a more transparent and democratic method for figuring out which areas to rank. I don’t pretend that such a system would be non-ideological or unproblematic, but I think it would be much less so than what we have now, and more useful to prospective students.

  8. Christopher Morris

    As I argued on another site recently, decent rankings (e.g., the Leiter surveys of the reputation of faculty at research institutions) are now necessary for departments in their dealings with the administrations of large research universities. At my own institution the alternative to the Leiter rankings would have been the National Research Council. These rankings are widely known to be wretched. But institutions will frequently use bad data rather than no data (e.g., course evaluations). Most of the discussants of the controversies over the PGR have focused on other things. But one should not overlook the usefulness of the Leiter rankings for chairs dealing with data-obsessed administrations. “No rankings” means really crappy rankings. Thank goodness US News does not rank philosophy depts, thank goodness we don’t have a Nobel Prize, and thank Leiter for his rankings.

  9. Tom Polger

    Neil, I don’t think I accept your argument for the inevitability of rankings of philosophy departments. But the claim that you need is much weaker: “no rankings” is not a de facto option. That I think is correct, and is the crux of your issue with the October Statement, viz., that the idealism of the “no rankings” movement fails to engage with the situation on the ground. Let me explain.
    I just finished a term as the Director of Graduate Studies (DGS) for my department. When the DGS meets with the Dean of the Graduate School to talk about our Ph.D. program or investing in philosophy, or about the processes that will dictate distribution of resources at the university, the administrators expect me to be responsive to various bits of data. (Or, “data” if you prefer.) The data include application, admission, graduation, and enrollment trends in my department over time. They sometimes include comparisons to peer institutions identified by my department or by someone else. And they include rankings. Which rankings? None produced by philosopher(s)! They don’t know about those unless we tell them. What they know about are two: (1) the NRC rankings, and (2) rankings produced by a private for-profit company called Academic Analytics. Both of these are highly problematic sources, though there is nothing to be gained by going into the details of why. The point is: Philosophy departments are, in fact, ranked. And whether philosophy departments get ranked is not up to us as philosophers.
    You might say that the rankings I mentioned are irrelevant because current and prospective philosophy Ph.D. students generally don’t know about them–and in the case of Academic Analytics, have no access to the rankings. But I do not accept the fantasy story that the purpose of departmental rankings is to help prospective students pick departments to which to apply–that pretense has gone on far too long. That rankings guide student behavior is certainly true; but I seriously doubt that it is the raison d’etre for any rankings. Prospective graduate students would probably be well served, as Roberta says, by large amounts of various sorts of data and increased transparency. But that will not replace rankings because the use of rankings in applicant decision making is simply not why the Dean of the Graduate School cares about rankings. And administrators are not merely using rankings because they stumbled upon them; they went out and hired someone to make the rankings at great expense.
    That this situation is deplorable is beside the point. I am not saying that we should, as a profession or as individuals, endorse any rankings. If we are department chairs or graduate directors, then it is probably not a live option to pretend that there are no rankings. There are rankings, and it is not up to us. It is not possible to opt out.
    Please do not tell me that I am acquiescing to a system that should be resisted. I resist it plenty; and I urge you to do so as well. One way that I resisted it is by collecting detailed data about my program that I can use to give accurate information to current and prospective students, and to administrators. But that is only a partial answer to what we who dislike rankings should do when we live in a world of rankings.
    I share the October Statement wish that there were no rankings. But Neil is pressing us to deal with a hard question: What should we who dislike rankings do when rankings are a fact of life at contemporary universities?

  10. Tom

    “But one should not overlook the usefulness of the Leiter rankings for chairs dealing with data-obsessed administrations.”
    But that coin has two sides. The other side is that for a department systematically disadvantaged by the Leiter rankings because they focus on areas against which the PGR is systematically biased (basically – with some oversimplification – for any department that does not focus on so-called ‘core’ analytic M/E stuff), its situation will be made worse by the PGR when dealing with administrators who rely on the PGR.

  11. p

    They do not have to then use those rankings, but their own pluralist rankings, no? Or no rankings, since, as many people seem to think, we do not need them. In any case, the very idea of rankings means that some departments will not be ranked, or not ranked highly. Not all departments can be ranked in the top 10. Moreover, it might not be so bad – there are way too many grad programs in philosophy (if your nr. departments = nr. jobs/2, then you have a problem).

  12. Christopher Hitchcock

    “As far as I know, there weren’t any influential and public departmental rankings before the internet.”
    In fact, there were such rankings. When I was applying to grad programs in philosophy in 1985, everyone knew about the “Gourman Report”. (The “Gourmet Report” is actually a pun on the name “Gourman”: Gourman = Gourmand, i.e. glutton; in contrast with Gourmet, one who appreciates fine food.) This was the work of one Dr. Jack Gourman, who was a prof. of political science, and was somewhere in the upper administration of the California State University system. He ranked undergraduate programs, professional programs, and graduate programs. His rankings were published by the Princeton Review. I could visit my university’s office of career services (which also had information on graduate and professional programs) and read a copy of the report.
    Gourman was not a philosopher, and he released rankings with no description of methodology. Yet for some reason people still paid attention to them.
    This all supports your contention that there will always be rankings, and people will always pay attention to them.

  13. Neil Sinhababu

    Sorry for the delay in responding, folks. I had some Nousday duties to attend to 🙂
    Hi Roberta (and AS and Simon)! When I looked up English and History, the US News rankings were first, and the QS rankings were second. This accords with what David Wallace is seeing. Apparently in History, US News does reputational surveys among department heads, which is kind of Leiter-like. I don’t know how seriously those rankings are taken by people. Enzo’s comment suggests that in political science, US News is taken somewhat seriously.
    Geoff, I think the specialty rankings are probably more accurate, since people know more about their own areas. I’ve wondered if there’s some way to just have people make specialty evaluations, then weight those by the number of people in each specialty, and compute overall rankings off of that. This would take APA-like surveying powers, but maybe it’ll happen. I’d like it because…
    …of the points Christopher Morris and Tom Polger are making. There is a deep hunger for ordinal rankings among administrators who hold great power over us. If we don’t make our own rankings and aggressively market these rankings to administrators using the approval of our professional associations, they will make their own ordinal rankings, or choose whichever of the many rankings suit their biases. (Or, if they’re lazy, ignore rankings and just use their biases.) I want philosophers to have more power over administrators.
    My ideal system probably involves having professional associations set up some kind of representative democracy of evaluators, which ends up feeding into ordinal rankings in some way. (Representative democracy would prevent area-slanted rankings of the sort Tom and p are discussing.) The rankings are aggressively promoted by the APA/AAP/UKpeople to administrators. The idea is to replace the role of administrator-whim in our lives with philosopher-driven representative democracy.

  14. Christopher Gauker

    “I do not accept the fantasy story that the purpose of departmental rankings is to help prospective graduate students pick departments.” In all of the on-line discussion of the PGR, I don’t think I have seen anybody question that before. But I agree with Tom on this. Maybe some of the people who do the rankings are motivated by a desire to help prospective graduate students, but I doubt that many are. It’s harder to say what does motivate them. I’d hazard a guess that it has something to do with the spirit of competition, pride in having the privilege of judging others and self-promotion. My own motives lay somewhere in there, on those few occasions when I was asked to serve as evaluator.

  15. try102030

    ‘The perfect is the enemy of the good’ seems to sum up much of what I see going on around this issue. I am glad this post tries to address this attitude.
    Unless a professor will say, “They’re all equal” or “I won’t answer you”, then advice about graduate schools in philosophy will be given to prospective philosophy graduate students by philosophers. So, rankings exist, even if only in the heads of philosophers. (Which, lots of people seem not to understand, is what the PGR is: an opinion poll. Rather, everyone seems to understand, but fears that administrators and students aren’t clever enough to understand.)
    So the issue here seems to be: a philosophy professor’s opinions on graduate schools are well and good and by all means help prospective students who ask for advice. But for the love of Zeus, don’t collect a bunch and put them on paper!
    This is the only issue I’ve ever come across where people think more data are worse data.

  16. TBC

    Wait, just to be clear: is signing the October statement consistent or inconsistent with supporting CDJ’s ranking work? Because I like the October Statement but I also like CDJ’s work. Is there a tension?

  17. Enzo Rossi

    @Neil: “I want philosophers to have more power over administrators.”
    But what philosophers? Philosophers who are biased against the type of philosophy done in certain departments, say? We don’t want administrators to tell department heads that they need to hire in the areas favoured by the PGR board.

  18. Christopher Gauker

    Neil, your proposal for combining the speciality rankings strikes me as the simple and obvious method. More precisely, to generate a value between 1 and 5, we weight each specialty by the number of people in that specialty divided by the total number of people we are looking at. The weights sum to one only if we assume that each person has just one specialty, and so we would need some correction to avoid that assumption. I made this proposal some years ago when I was still putting my two cents in at Leiter’s blog (I’ve been avoiding it for years), but nobody else chimed in.
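    [Editorial note: the weighting rule Gauker describes can be sketched in a few lines of code. This is a hypothetical illustration with made-up scores and specialty memberships, not the PGR’s actual method; the correction he mentions for multi-specialty people is handled here by splitting each person’s weight evenly across their specialties, so the weights still sum to one.]

```python
# Sketch of the proposal: aggregate mean 1-5 specialty scores into one
# overall score, weighting each specialty by its share of people.
# A person listing k specialties contributes 1/k to each of them.

def overall_score(specialty_scores, memberships):
    """specialty_scores: dict specialty -> mean 1-5 score for a department.
    memberships: dict person -> list of that person's specialties."""
    weights = {s: 0.0 for s in specialty_scores}
    for specs in memberships.values():
        for s in specs:
            weights[s] += 1.0 / len(specs)  # split multi-specialty people
    total = sum(weights.values())
    return sum(specialty_scores[s] * weights[s] / total
               for s in specialty_scores)

# Hypothetical department: strong in M&E, weaker in ethics.
scores = {"M&E": 4.0, "ethics": 3.0}
people = {"a": ["M&E"], "b": ["M&E", "ethics"], "c": ["ethics"]}
print(overall_score(scores, people))  # → 3.5
```

    Both specialties end up with weight 1.5 out of 3, so the result is the midpoint of the two scores; a specialty with more people would pull the overall score toward its own mean.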

  19. cautious philosopher

    It is important that people see why the APA should not get involved in rankings. The APA purports to represent everyone in the discipline: faculty at research institutions with graduate programs; faculty at private universities; faculty at religiously affiliated colleges; faculty at state colleges; faculty at two year colleges; and more and more, adjunct instructors throughout the system. Rankings are going to be very divisive and alienate one group or another that the APA purports to represent.
    I am not against rankings. They are happening anyway. I rank departments when students ask me where they should apply for graduate school, for example. But the APA should not be in that business.
    And to think we can aggregate specialty rankings into an overall ordinal ranking that will serve the profession is naive. The most effective way to deal with the predicament is to recognize that there will be numerous rankings by different measures created by different groups of people, and to discuss such matters with students, administrators, and others who will use these rankings.
    The US News rankings, despite their flaws, are popular with parents who are paying for education, so every school in the US that cares to court parents (is there one that does not?) has to have an eye on those rankings.

  20. Michael Kremer

    @cautious philosopher #19.
    Yea and amen. Thank you.

  21. Neil Sinhababu

    Thanks to try102030 and Christophers Hitchcock and Gauker for filling out more data and details of views I share.
    TBC, I see no such tension. A proposal to integrate CDJ’s work into the rankings in some way, which I was thinking about, might introduce such a tension. But that was just my idea and not anyone else’s.
    cautious philosopher and Enzo: If we conduct the evaluations by democratic means that incorporate the opinion of the whole philosophical community, will that solve the “which philosophers” problem? The answer will then be “all philosophers”.

  22. p

    Mark Schroeder had a post in which he argued that one of the reasons why he thinks PGR is less accurate in specialty than in the overall rankings is that as it happens, many people doing the specialty rankings are not quite “in” (i.e., at the top of the research in the specialty), but, due to various reasons, more like outsiders at the moment. Now I do not happen to agree with him in this particular instance but I think his point would gain much more plausibility if we were to include all philosophers, the whole community. I do not know where to draw the lines, obviously, but BL has been following one in which people who are regarded as top notch recommend others and so on (at least I think that that’s the way it goes). This seems to me sensible since, otherwise, I am not sure how we would get a group whose assessment would be coherent and useful. There will, of course, be all kinds of misgivings from people who feel excluded by this, but that seems inevitable.

  23. cautious philosopher

    You seem to miss an important point about quality rankings. They are not determined democratically, and for good reason.
    We do not try to give everyone a chance to referee articles or grant proposals. We invite experts to do it. It is their opinion that we value, and rightly so.
    We can raise objections about who counts as an expert or authority with respect to any particular ranking, but a meaningful ranking will need to draw on experts’ opinions (or quantitative measures).

  24. Neil Sinhababu

    Two solutions, cp:
    (1) Everyone (maybe every APA member with a PhD) does specialty rankings in their area of expertise, and then we aggregate the results, weighting by the number of people in each specialty.
    (2) Everyone (see above) votes on people they regard as qualified experts, to do a lot of ranking work. Those democratically elected people become the rankers.
    In each case, we’re ranked by experts, either in their specialty (presumably if you have a PhD you’re an expert at something) or by people we think are experts.

  25. p

    I do not think that having a PhD is sufficient for being an “expert” in a specialty; it’s a necessary condition for becoming employed. When I got my PhD I felt I was an expert in barely 2% of my field. I had some idea about who the relevant people in the specialty were, but only within the much more limited field in which my dissertation was written did I feel confident about being able to judge the quality of others’ work (and that is a subjective feeling on my part). I have broadened my perspective much more since, but it took another 10 years at least to get here, all the while trying hard to stay on top of things, both in my own work and in keeping up with the literature, and being at institutions that push research hard and leave me with a lot of time and relatively minimal teaching duties. Still, even now I am not sure others would feel that way about me (judging from some peer reviews, maybe not), and I can name 20 people right off the bat whose judgment I trust more.

  26. Margaret Atherton

    There seems to be a view floating around that the overall rankings as provided by the PGR are pernicious but the specialty rankings are useful, which I am finding a little odd. I find the specialty rankings helpful when, say, a student comes to me and says, “I want to study philosophy of religion in graduate school”; then I can go and find schools that have people doing philosophy of religion. Then I can go to the department website to find out who that person is, because for most subfields in philosophy departments there is no more than one person. And I feel uncomfortable with the notion that, even if what you are doing is putting departments into categories, you can actually rank individuals. There are a lot of people whose work I admire in my own field, but I think it would be madness to try to produce a rank ordering of them.

  27. A

    This is exactly right; I’ve said it before and I’ll say it again: most people in other fields would find the idea of academics in a field ranking each other’s reputation in poor taste.

  28. Christopher Gauker

    Prof. Atherton, I take your point.

  29. Gordon

    One point which isn’t getting made often enough, I think, is that the presence of an official or otherwise dominant ranking system (a position enjoyed by the PGR) will tend both to reflect and perpetuate the disciplinary status quo. If “top n” programs mostly do core M&E, then aspiring departments and their deans will have massive incentives to be more like those programs, which is to say more M&E-based (and, within that, M&E that more resembles that which is highly ranked). Similarly, if the rankings system dismisses entire categories of philosophy as crappy, then that act will damage those programs and the people in them. I complain about the boundary-work that PGR does every so often, and that’s the basic claim I’m making (and notice that it does not depend on any sort of deliberate attack by the person(s) running the ranking system).
    The PGR is particularly difficult in this context because it does claim to rank “continental” programs. So there’s a specific version of “continental” work that gets encoded as “real.” Leiter defends his continental list as reflecting some sort of mainstream opinion, and of course it does reflect the mainstream of the people he interviews. But that names the problem, not the solution: the importation of substantive views about the nature of the discipline into the supposedly neutral process of ranking them.
    I happen to think that any ranking system is going to import a substantive view of the discipline; to the extent that such a ranking system becomes dominant, it will also tend to create a hegemonic normative view of the discipline. That’s what’s most wrong with a semi-official or de facto official ranking. I suspect it’s part of why folks find it easier to support the specialty rankings than the overall ones: the overall ones make an implicit normative claim about what philosophy as a whole should be, and then present that as a consensus view.

  30. p

    Can you explain why that is so bad? One of the things that is so hard to get students to see is that philosophy is not an exercise in self-expression, that it is not a free-for-all discipline, in which anything counts as “doing philosophy”. We try to teach them that there are in fact standards and norms and that they can be evaluated by those standards and norms. This is in fact part of what makes philosophy an academic discipline. There is this view, perhaps implicit in your post, that somehow there are no such rules or norms and nobody can tell anybody what counts as philosophy – getting a PhD or contributing to a blog on philosophy makes one a philosopher. I find this picture of philosophy deeply troubling and it worries me that it is a path to exactly nowhere. Plato’s worries about democracy might not be the best when it comes to political systems, but I think they do have something to tell us…


  31. David Wallace

    And yet, they do it all the time. It’s a completely standard question when you’re asked to write for tenure review, post-tenure promotion, or grant assessment: how do you rank this person against the best people in their field at their career stage? You’re often* asked to make explicit comparisons.
    * Well, to be fair, I haven’t done enough of these to say reliably what they’re like. But I’ve done a reasonable number, and they’re usually like this.


  32. Gordon

    I think there’s a difference between saying “anything goes” and the PGR. There’s all sorts of hierarchies and institutional forces that get in the way of anything-goes at the professional level – the need to find a dissertation advisor, reviewers for journals, and so forth. I take it that any professionalized discipline, especially one that tends to take its own history as part of its self-understanding, will thus tend to be conservative in that sense of the term. That may be a good or a bad thing. But it seems to me that kind of conservatism is not the same as the conservatism generated by a hegemonic, reputation-based ranking system. The latter form magnifies whatever problems the former one generates by giving a centralized imprimatur to whatever biases are built into the professional ecology. In so doing, it further entrenches them, and further marginalizes those who might do otherwise.
    I should probably be clear that I’m speaking specifically about the PGR’s marginalization of “SPEP” continental thought. Whatever one thinks of the quality of the work in that tradition, the fact remains that a fairly sizable number of people work in it, publish successfully in it, go to the big conference, and so on. The PGR is excluding an entire professional ecology. Now, SPEP is fully capable of being conservative too, and if anything the kind of continental work done there has historically tended to be even more demanding than analytic work when it comes to locating oneself in a historical tradition.
    The other piece of the puzzle is of course that lots of people who do important work in feminism, race theory, queer theory, disability studies and so forth (this is Butler’s embarrassed etc.) find a supportive environment in SPEP departments. So excluding those departments can have the effect of marginalizing just the sorts of people that everyone has a commitment to including. That does not have to be intentional to be a problem.
    Plato thought democracy was the corruption of oligarchy. My concern with the PGR or comparable ranking system is that it promotes oligarchy.


  33. p

    What is the evidence that it “promotes hegemony”, and what exactly does that mean? People like to say these things as if the mere words would somehow do the work of evidence, but I am not sure they do. I saw the Bharath Vallabha paper, but it mostly looked to me like someone complaining about the fact that people at the top departments come, generally, from the very top departments. That seems to me neither surprising nor alarming. In fact, if anything, it is something one would expect.
    There is a real issue about the PGR’s undervaluation of certain fields, insofar as departmental prominence in those fields tends not to be reflected in the overall rankings in the way people feel it should be. I am from one such field, but I do not actually see this as a problem that somehow endangers the whole project (this concerns history of philosophy, feminist philosophy, etc.). I think this consideration has a lot of merit, and I expect it to be addressed as the PGR goes on.
    Concerning SPEP – I think that there is something odd about SPEP quite in general. That is, I find it extremely odd that there are actually SPEP departments which (with some exceptions) employ only people from other SPEP departments. It looks to me on a par with there being “German Idealism” or “Marxist” or “Society for Exact Philosophy” departments which would only employ people from other Marxist or German Idealism or SEP departments. So that’s the first point. Insofar as that is so, maybe there should be a separate ranking for people interested in that kind of work.
    P.S. Personally, I do not find most SPEP philosophy more historically situated than other, supposedly analytic (history of) philosophy. I know that this is the claim, but having read a substantial amount of the work published by SUNY and other venues, my impression was exactly the opposite – it struck me as the counterpart of the kind of analytic history of philosophy from the ’60s and ’70s that was fond of formalizing arguments (something nobody does anymore): this work uses vaguely postmodernist/phenomenological vocabulary to muse over historical texts as if that were somehow illuminating.


  34. try102030

    This is a genuine question not meant to insult anyone involved in all this. Has anyone consulted a professional statistician, research methodologist or survey methodologist in all this?
    While philosophers – like many people – are sometimes smart and have good things to say about interpreting evidence, PhDs are granted in the three subjects I mention above, and together they form a vast chunk of the science of collecting, analysing, interpreting and presenting data…
    I’m all for people solving their own problems, but why risk reinventing the wheel – or missing out on it? The Greeks’ work on conic sections was no hindrance to the later work on planetary motion, despite the fact that they weren’t studying conic sections with this in mind.
    It is not the case that only academics in philosophy departments have the smarts to recognise the problems involved in rankings or how to address those problems.


  35. try102030

    “What is the evidence that it “[ X ]” and what exactly does it mean? People like to say these things as if the mere words would somehow do the work of evidence, but I am not sure they do.”
    Agreed. Not specifically at Gordon but in general around this issue. Actually, all issues!


  36. Gordon

    This is going to be my last post on this, because I can see that the conversation isn’t going to go anywhere. But here are a few final points:
    1. Evidence. There’s an accumulating set of posts over at Daily Nous that show the contortions departments will engage in to try to advance their position in the PGR, whether because that’s how to get positions out of deans or for some other reason. That constitutes at least prima facie evidence for the proposition that the PGR has demonstrable effects on hiring practices. It’s the PGR’s semi-official status that makes this problem more acute than with other kinds of rankings. I think the burden of proof ought to be on anybody who claims that a semi-official set of rankings for a discipline is not more influential than the presence of several competing rankings systems, since that claim is utterly counter-intuitive.
    2. Leiter viciously attacks what he perceives to be SPEP departments, the people who work in them, and the figures those people and departments study. Even when he likes the figures they study (Nietzsche, for example), he attacks the work those people do. I assume everybody knows this by now. I wasn’t born yesterday, and I know that his views – and far more dogmatic ones – are well-represented in highly ranked PGR departments. His own citations to social science research (or Cass Sunstein’s deployment of it, anyway) on group polarization suggest that a crowd of SPEP-hostile evaluators will make themselves more SPEP-hostile over time. The other half of Sunstein’s argument here is about social cascades, in which those who lack sufficient information to make an informed, individual judgment about something (e.g., the folks tasked with coming up with overall rankings) default to what they take to be the consensus of others.
    3. Therefore, reliance on the PGR can be expected to damage so-called SPEP departments and the individuals who come from them. Is this inference reasonable? Well, there’s certainly supporting evidence and the logic is coherent. So simply dismissing it won’t do.
    At that point, the real question is whether the PGR ought to decide what the discipline is. Since we can all apparently agree that some sub-disciplines are undervalued by the PGR, we ought all to be able to agree that the PGR is not an infallible source for what philosophy is or should be. If that isn’t enough, then the confessions of SPEP evaluators that they really have no idea what they’re doing when they fill out overall rankings ought to suffice. The fact that “SPEP” persists despite efforts to destroy it (e.g., the “odd” fact that SPEP-affiliated folks can get jobs) says that something is amiss, and that interest in that sort of work is resilient (aside: the argument here, I am aware, works better against the overall rankings than it does against the specializations).
    Finally, my claim about the historical nature of much of SPEP philosophy is nothing controversial. It has two parts. The first is that the primary figures you tend to find studied in SPEP – Heidegger, Nietzsche, Foucault, Deleuze, Hegel, Derrida, etc. – present themselves not just as historians, but as historically situated w/r/t the history of philosophy. So (part 2) contemporary SPEP scholarship tends to encourage both the study of these figures and the production of contemporary work that tries to situate itself historically.

