By Roberta Millstein

John Protevi, founder and emeritus member of New APPS, has posted an "October Statement."  By signing, one states one's opposition to the ranking of philosophy programs, whether in the form of the current PGR or in some other revised form.  The statement contains links to those who have offered reasons for taking such a position.

Protevi seems to have found a second statement to be necessary because he thinks that the September Statement implies that ranking systems confer a (net) benefit on the profession.  I don't think that it implies any such thing, and in a comment over at the Feminist Philosophers blog, Daniel Elstein nicely sums up why:

I guess what we should try to remember is that it’s really hard to write a statement that pleases everyone. People who support (PGR-style) rankings and people who oppose (PGR-style) rankings can (and should) agree that it is worse if Leiter is PGR editor than if he isn’t. The phraseology in the September Statement that seems to irritate ranking opponents is clearly there to reassure the ranking supporters that signing on is compatible with supporting (PGR-style) rankings. Ranking opponents should recognise that it is a good thing if all those who oppose bullying (including ranking supporters) can sign a unified statement, and so interpret the relevant parts of the statement charitably. The problematic sentence could be read: “With a different leadership structure, the benefits [that some attribute to] the guide might be achieved without detriment to our colleague.” That’s true, right? And it’s all that the authors will have intended.

That being said, I do understand why some might share Protevi's interpretation and for that reason not feel comfortable signing the September Statement.  I would encourage those who feel similarly to sign the October Statement, while also pointing out that it is consistent to sign both statements (as I have done).


35 responses to “Against ranking”

  1. Nathan Jun

    Bravo, John!!!!


  2. John Protevi

    Thanks, Roberta, I don’t think it’s inconsistent to sign both, and neither do quite a few others. But if you do, then I think the October Statement enlarges rather than splits the opposition to the PGR.


  3. Craig Callender

    While I understand that PGR has had nontrivial effects on the profession, both good and bad, I don’t really understand the rallying around a “no ranking!” position. That’s putting one’s head in the sand. There is no question we will be ranked. There are plenty of rankings, e.g.:
    http://chronicle.com/article/NRC-Rankings-Overview-/124753/
    Students at schools without “in the know” advisors, like where I went, will use these rankings. So will administrators. The question therefore seems to me whether we take control over the rankings within the discipline or have them imposed from outside. I prefer that we do it. Even if we do it badly, it will still be better than what US News and World Reports cooks up.
    It’s consistent with this that we come up with something very different from PGR. In fact, one can imagine something without a univocal overall ranking. Take a look at the NRC rankings above. While badly done in various ways, note that you can click on various measurement desiderata and it reorganizes the table. Similarly, one can imagine a PGR — or something — that organizes schools according to placement, to reputation, to reputation in subject X, to citation number, etc, etc. Then a school would not be the n’th school, which is reasonably meaningless in many respects. Instead it might be 7th best in citations, 13th best reputation, 3rd best in placement, 16th best in ethics, and so on. That might provide a useful snapshot of things.
    But I digress. Given that rankings exist and will exist, they will be used. The question is then only whether we want to produce something better in-house.


  4. Eric Winsberg

    I agree with everything Craig said and would add one more point: not only will “out-of-house” rankings exist no matter what, but if we as a profession don’t organize around doing something positive, someone “in house” will do rankings we will not like. We already have an existence proof of this, and if you read Leiter’s blog today, its clear his preferred plan is to produce the 2014 PGR and then wait and hope the movement against it dies down. The only movement that can be strong enough to push away something we don’t like is to replace it with something we like better; here Craig and others have had just the right idea: a web page that can rate departments on a variety of criteria depending on what you click on.


  5. Carolyn Dicey Jennings

    “a web page that can rate departments on a variety of criteria depending on what you click on.”
    This is the sort of thing that I would like to see.


  6. Roberta Millstein

John, I agree with your second point, that the October Statement enlarges rather than splits the opposition to the PGR, although I would have preferred it if people could have accepted the interpretation that Daniel Elstein provided and signed the September Statement. Then again, I am happy to see the issue of rankings get further discussion, so that is all to the good.
    However, I am not sure that I understand your first point. If it were true that one thought that ranking systems confer a (net) benefit on the profession, it would certainly be quite odd for one to say that they were against ranking. One would have to believe that we ought not to do what is best for the profession.


  7. Roberta L. Millstein

I agree that information presented without a univocal overall ranking would be much preferable to what we have now. As you suggest, such overall rankings are reasonably meaningless in many respects, if for no other reason than that students (and administrators) may value some desiderata more than others. However, I find reputational rankings problematic in a variety of ways, most notably that they will almost certainly incorporate biases and, worse, entrench biases. I also think that many desiderata are not really quantifiable, e.g.: What sort of research is being done at university X (what type of approach, and what subjects)? Do graduate students feel supported by their advisors and by the program more generally (e.g., with good placement procedures)? How do we rank placement – how do we compare a 2/2 placement to a 4/4 one (especially given that someone might actually prefer a teaching-oriented job), and how do we take into account those who choose not to apply for philosophy positions? And how do we do rankings even within subfields, when there are different approaches and different criteria for what counts as good work? So yes, we need to provide our own alternative – with that much, I agree. But very little of it, if any, should involve rankings. More qualitative data would actually be more useful.


  8. p

I think this is a terrible idea. It will be back to hearsay, the old boys' club at the Ivy Leagues, and the provincialization of all but a handful of state schools – or, alternatively, pseudo-scientific and, for philosophy, useless measurements of productivity and such.


  9. Roberta L. Millstein

    I still think it would be good to have a central clearinghouse for criteria that would be relevant for prospectives and administrators, and the presence of such a clearinghouse would pressure schools to provide the relevant data. The clearinghouse could also double-check the data. So no, it wouldn’t have to be back to the bad old days.


  10. TBC

    “a web page that can rate departments on a variety of criteria depending on what you click on.”
    Wouldn’t that just reproduce the problem you’re trying to solve? You effectively presuppose a global ranking of departments if you define a ranking (or a function providing such a ranking) for an arbitrary combination of criteria.
You could get an overall ranking, like the kind Leiter provides and that you decry, simply by selecting all the criteria. It wouldn’t be an accurate ranking, you might respond, because it wouldn’t be weighted in an accurate way, so people wouldn’t take it seriously. But I take it that critics of the PGR deny that its overall ranking is weighted in an accurate way (if such a way is even possible). That hasn’t stopped people from taking the PGR seriously.
    I suppose you could actively prevent people from selecting all the criteria, but that strikes me as a little ridiculous.


  11. p

Who would be the clearinghouse? I think what often escapes attention is the fact that Leiter ranks departments on the basis of reputation for the use of prospective graduate students, to offer them a weighted summary of the opinions of people other than their own advisors (if they have any). This is something that no undergrad has access to. And it also has some actual relevance (when taken with a grain of salt) – the idea being that what makes somebody a good philosopher is not so far from what makes somebody a good artist: the judgment of reputed peers. All the placement data and such – those, people can find out on their own and judge on the basis of their own needs, desires, and so on. Concerning the biases – yes, that is, I think, obvious to anybody, but so what? The PGR is not a total authority, nobody thinks it is, and every student I have talked to is aware of that. I am not sure what the talk of “entrenching” biases amounts to. Is that a factual statement based on observing the development of the rankings vis-à-vis the movements of faculty and publications, or is it just a suspicion? Departments have moved up and down quite often, and the fact that some that were on top went down should indicate that the entrenching cannot be that serious.


  12. John Protevi

    Hi Roberta, in an exchange with magicalersatz at FP, she pointed out that the SS simply says “benefits,” so someone could say the costs outweigh the benefits and still sign it. I hesitated before signing it because as I put it, I didn’t see net benefits, but the “net” is an addition, and if you take that away, the inconsistency goes away too. So I ended up signing the SS too.
    As for the inevitability of rankings, I’m not sure. I think evaluation is inevitable, but that could take the form of coarse-grained ratings. In any case, I thought there was shared sentiment for the OS or no rankings position so I wanted to provide a place for people to affirm that position.


  13. ck

I agree with you in principle, John (et al.), but isn’t it idealistic to pretend that rankings can be wished (or even argued) away? We can’t just banish philosophy rankings, and we shouldn’t pretend that we are in a position to play president with the profession in that way. A profession should be seen as a site for negotiation, not for decision.
    When the Leiter-led PGR ranking dies, the desire for rankings that the PGR has already produced will remain, and someone somewhere will produce a new ranking (in a way, people are already working on it, aren’t they?). (By the way, this is basic Deleuze on productive desire in Anti-Oedipus, isn’t it?)
    If I am right that an alternative ranking effort of some kind is highly probable (and highly probably already underway), we should all work to ensure that it is as fair, inclusive, and unbiased as possible. Condemning rankings outright is just not an effective way of doing that.
    Also, p makes a really smart point: if there were no published rankings, people would still desire them so much that they would immediately create and trade informal rankings of the ‘old boy network’ variety, and those would then become entrenched and powerful, just as they were in the ‘old days’. In fact, every member of the profession should realize upon reflection that we already have views about which departments are relatively strong and which weak (in the fields in which we work). One useful thing about a public and social ranking system is that it offers a check on our own subjective and individual evaluations.


  14. Roberta L. Millstein

    John, thanks for the clarification. I understand your position now.


  15. Roberta L. Millstein

I think there are different possibilities for who could be the clearinghouse, and it’s a point that we ought to discuss as a community. Alan Richardson has one proposal here:
    http://dailynous.com/2014/10/02/open-letter-to-professional-philosophical-associations-guest-post-by-alan-richardson/
    Another alternative might be to have an organization devoted to the task, with professional philosophers (definition TBD) to vote on the composition of the governing board.
    I’m sure there are other possibilities worth considering.
As for where the entrenching is, I think it can be seen most clearly with the areas of philosophy. The PGR has entrenched LEMM areas at the expense of other areas in a way that was not the case prior to the PGR. To be clear, I am not saying that this was deliberate on Leiter’s part (in fact, my understanding is that he himself sees it as a problem with the PGR, although I could be wrong about that), but it is something that happened, and it is pretty obvious to anyone who has been paying attention over the history of the PGR. That has had pernicious side effects on the numbers of underrepresented groups in philosophy, given that members of these groups are not distributed equally across areas (see Carolyn Jennings’s post, http://www.newappsblog.com/2014/04/the-gourmet-ranking-and-women-philosophers.html ). And while I think there has been some movement in the rankings, I wouldn’t say that the amount of movement has been particularly significant. Plus there are many departments which are simply not ranked because the PGR has decided that they should not be ranked.


  16. Margaret Atherton

I am concerned about the assumption that either we will have PGR-type rankings or fall back into the bad old days of uninformed students. Students can use PGR rankings in very uninformed, even lazy-minded ways. There is quite a lot of the “I’m going to apply to the top ten Leiter schools and call it a day” mindset floating around. We don’t HAVE to have the sort of ranking the PGR provides, and I would rather see much more nuanced sets of information available that would encourage students to take a more active attitude toward the schools they are going to apply to and what they will study when they get there.


  17. Mark Lance

I’m not convinced by Craig or P. First, do we actually have evidence that outside rankings will be used in anything like the way that the PGR is? I don’t recall ever hearing of an undergrad considering a ranking when I was in school. People asked their own profs, asked those profs the profs they were sent to, and visited departments. Nor do I think administrations used rankings the way they do now. They might have noticed them, but my sense is that they were not taken nearly as seriously as those that purport to be internal and objective. I’m not presenting more than anecdotal evidence of course, but I’ve not seen any on the other side either, just an assumption that it is inevitable that whatever group makes the rankings, they will be used the same way. So I’m not saying Craig’s claim is wrong, just that I’m not, and don’t think others should be, convinced.
    Re P’s claim, it is simply false that the Ivies dominated the informal advising world pre-PGR. State schools like UCLA, Berkeley, Arizona, Michigan, Wisconsin, Pitt, Chapel Hill, and UMass were uniformly recommended as top grad programs in the early 80s, and non-Ivy privates like MIT and Chicago as well. Only Harvard and Princeton were included in this list by the people I talked to, though Cornell and Penn were mentioned as good second-tier. And there is another really important point here. Granted that there were networks, but there have always been lots of them. I personally got lots of advice from Robert Kraut, Bob Turnbull, and Bill Lycan. They didn’t all give the same advice, and in no case did I think of this as more than the advice of someone I trusted. It was not the OFFICIAL OBJECTIVE RANKINGS. It was Bill’s opinion. One collected lots of opinions. One visited schools. That is, the facade of objectivity – I’m puzzled that P thinks that citation, publication, etc. metrics are pseudo-science, but that a collation of overall opinions by a subset of the field isn’t subject to this epithet – gives advice a sort of authority that it arguably shouldn’t have.


  18. S7

    I was going to post this on the October statement, but as a non-philosopher (just a fascinated observer of the field), I figured it would not be appropriate for me to weigh in, so I will put my agreement and reasoning here:
    I have seen some very good arguments against the rankings as implemented, but I am curious, what exactly is the argument for reputational rankings’ utility for the students as a whole, even assuming they are implemented in a way that is completely accurate and completely bias-free?
Let’s say there are 1,000 funded PhD student openings a year. Let’s also say they are all filled every year by 1,000 students (since there are probably more applicants than funded positions, I would assume almost all get filled).
    It is logical to assume that people will only be accepted into programs that offer the specialization they want to study (through basic research on the part of the student, and through departments filtering out incompatible personal statements). The only difference reputational rankings make is that the stronger applicants become more likely to apply to the better-ranked programs; because of that, the stronger students probably do benefit from the rankings when it comes to their career prospects.
    I would assert, though, that any added utility to the stronger students from the rankings system is exactly equal to the reduced utility to the weaker students, who might have been able to get into a better-ranked program had they not been crowded out by the stronger students. The only way the rankings would be a net positive utility to the students is if there were some moral reason to ensure that stronger students go to the stronger programs, but I don’t see how there could be. Even if you could argue that one student was fundamentally more intelligent than another, it doesn’t follow that they “deserve” to be in the stronger program. And of course, any student who is too weakly prepared will simply not get into the stronger program.
    To sum my hypothetical up: with rankings you get 1,000 students getting 1,000 positions studying the subjects they want to. Without rankings you still get 1,000 students getting 1,000 positions studying the subjects they want to. Unless you think there is a normative reason to favor some of these students over others in terms of career outcomes (and again, I don’t see how there would be one), there is a net utility of zero for the students.
    So you then look at the impact of the rankings on the profession as a whole, which I think (and others have cogently argued) is negative. So it makes sense to me not to have rankings, though in practical terms I’m not sure whether this is feasible.


  19. Roberta L. Millstein

To support your point about pre-PGR advice with my own anecdotal story — I started graduate school in 1990, well before the PGR. My main advisor recommended schools strong in philosophy of science like University of Minnesota (where I ended up going), UCSD, Pittsburgh, and Indiana. Granted, he was a philosopher of science himself, but I remember distinctly bringing my list of schools to one of my other letter writers, a list to which I had added UNNAMED IVY. Upon seeing my list, this professor, who was not a philosopher of science, remarked, “UNNAMED IVY?!?! I didn’t know that UNNAMED IVY was strong in philosophy of science!” And it was true, my reasons for adding UNNAMED IVY to my list were not ones that were relevant to my graduate career. My point is that pre-PGR, Ivies were not automatically recommended over non-Ivies (at least in my experience), and that the advice my professors gave emphasized the importance of going to a department that was strong in one’s area of interest. Oh, and I was at an Ivy League undergrad, so I don’t think it was an anti-Ivy bias!


  20. p

I don’t think the point is whether some people (like the people who blog on NewAPPS) like the PGR or not. The point is that many people find it useful. If you do not, nothing is forcing you to use it or to recommend it to anybody. Second, what Mark Lance said actually proves my point. You were getting rankings and recommendations from a bunch of old boys whom you happened to know, who happened to be where you studied. I came from abroad and knew exactly no one, and the only thing I had at my disposal was the PGR; the only person who actually replied to my request for advice and offered me advice on applying was Brian Leiter. I did not have the money to visit the USA, or connections, or experience (I did not even know what the Ivy League was). I think you are approaching this from the point of view of people who studied at places that were already pretty well-connected, and forgetting about a lot of us who do not come from such places. Also, who thinks that Leiter’s ranking is an official objective ranking? One might present it to an administration as such to get an additional position, perhaps, but I doubt even that. I never heard that before. Third, most of the undergrads I speak to who are interested in applying to grad schools have exactly the opposite attitude of what Margaret Atherton described. If anything, they focus on the specialty rankings and ask about them. They still discuss things with their professors and very much do not use it mindlessly.


  21. Christopher Gauker

    Roberta, the September statement contained this line: “With a different leadership structure, the benefits of the guide might be achieved without detriment to our colleague[s].” By this use of the phrase “the benefits” the authors clearly imply that the guide has some benefits. I appreciate the sentiment and the effort, but that implication deters me from signing on. I don’t see that the PGR has any benefits, because there is no reason to believe that it reliably measures what it claims to measure, namely, faculty quality. Some people may believe they have benefitted by it, but I don’t believe them. At most they have benefitted from having an easy-to-use list of links. Some people probably believe they have benefitted from horoscopes, but they are wrong.


  22. Mark Lance

Neither Robert Kraut nor Bill Lycan was old in 1982, though I’ll grant that Robert Turnbull was.


  23. Roberta L. Millstein

The point I am making isn’t that I don’t “like” the PGR. The point I am making is that there is very good reason to think that the PGR has been harmful to philosophy. It’s true that I was lucky enough to have been given good advice (albeit not from “old boys”), and I am thankful for that. However, if you read what I have said above, my proposal isn’t to get rid of rankings and leave grad students with nothing to guide them. My proposal is to get rid of rankings and replace them with something better, something that gives information without attempting to rank that which cannot properly be ranked.
    And yes, the PGR has been used to argue for additional positions; I have firsthand knowledge of that.


  24. David Wallace

    Given your premises, the rankings system makes people who are better researchers more likely to have jobs in places with more research time. So the quality of philosophy research done is going to be higher. Insofar as not everyone gets a job in the profession, the quality of undergraduate teaching is also going to be higher. Both of these are good things, though neither will come out from considerations of benefit to the students themselves.
    (There’s plenty of space to doubt whether this happens in practice but it does given your stipulations.)


  25. Fritz Warfield

    Chris,
    I always enjoy hearing from you about various things — you know that I respect your views even when we find disagreement. So let me try to get more clarity about what you’re saying here.
    Here’s an example of one student I know who believes he benefitted from the PGR. He wanted to study epistemology. He had “no idea” where to apply to graduate school and at the time he was applying had no advisor who provided specific advice. He used the PGR to get some initial ideas about what schools to consider. After considering other factors, including some personal geographic preferences, he applied to a couple of the schools in the PGR top 5 for epistemology and several schools in the next two tiers of the PGR epistemology rankings. He did not apply to schools not listed in the specialty rankings for epistemology. He was admitted at two schools both recognizable by any of us with knowledge of the field as having good faculty in the area and he accepted the offer he thought would give him access to the best training in his intended specialty.
You think that this student at most benefitted from “having an easy to use set of links” and perhaps benefitted no more from the PGR than those who “believe they have benefitted from horoscopes”. Do I have you right on this? My response when learning of this student’s story was instead along these lines: I’m happy you found the PGR to help you with your search for a graduate school. You presumably don’t think that the student has ended up in a bad situation, do you? I can share more information privately about where the student is studying and how his work is going if you need that information to answer. Maybe you think that the student was simply lucky to end up in a place where his work in epistemology could flourish? His outcome certainly involved less luck than if he’d ended up in a good place after simply guessing where to apply for graduate school rather than using the PGR’s recommendations for the field.
    If you’d rather talk privately about any or all of this, get in touch by email. I hope you’re well.


  26. Christopher Gauker

    Fritz: Thank you for the gentle introduction. I do take a hard line on this. The PGR is primarily an expression of shared biases. Those who are doing the ratings have very little first-hand information on the faculty they are rating. Consequently, their ratings of a department are liable to be strongly influenced by their perceptions of the past strength of the department and the overall quality of the university to which it belongs and, now, by their knowledge of its current Leiter-rank. (I owe the last point to a recent comment by Richard Heck on another blog.) The effect of these perceptions will not be filtered out by the accumulation of ratings precisely because they are shared. I do not have the same objections to the specialty rankings, which your student relied on, because, in the case of those, the raters can be expected to have more of the necessary first-hand information of the few people in each department working in the pertinent area. The comparison with horoscopes was meant to say that a person might use the PGR and be happy with the result, but what might make that more than a lucky break is at most that the PGR provides easy access to departmental websites.


  27. p

Right, I understand that point. But I think this presupposes that students can then go on and rank things themselves on the basis of that data. I am doubtful about that (as I am doubtful about the data itself being useful, but that’s a different issue). This means that they will go to their faculty for advice – and then the ranking will be done by that faculty: 1, 2, at most 3 people. So we are back at square one. From my point of view, there are basically two positions against ranking. First, the very idea of ranking departments (of philosophy) seems somehow odd, repulsive, un-intellectual, or what have you. But this often comes from (I would venture to say) the very same people who are happy to rank their grad students as they come onto the market and who would, when asked to recommend some programs, rate some over others. So I do not see a problem with having a “ranking” done by a bunch of well-known philosophers to help me as well. Second, it comes from people who feel excluded by the PGR somehow (some in the SPEP corner, especially). I don’t have much to say about that, besides that I remain to be convinced that history of philosophy done by SPEP-like people is a worthwhile way to spend one’s graduate studies, rather than an unfortunate fad in the US (I do read some of it that’s relevant to me, and I was educated in Europe in French structuralism and post-structuralism, so I am not exactly an outsider here). This is my bias, shared I guess by Leiter and others, but I would be willing to give this one up.


  28. Eric Winsberg

    Only with the added premise that the “stronger applicants” coming out of undergraduate training are bound to become, given equal training, the “better researchers.” Most of the “stronger applicants,” it happens to turn out, at least in the US, are on average folks who went to ivy league schools, elite SLACs, and the like. So there’s reason to suspect that strength of application tracks class and precociousness as much if not more than it tracks potential research talent.


  29. Roberta L. Millstein

    I don’t see why the student needs to do any ranking. The student should compile a list of programs that fit the student’s desired profile – this should include looking at the research being done by the schools of interest and the program’s commitment to the student’s area, but also many other desiderata such as placement, etc. One doesn’t get a ranking out of that, but rather ~10 programs which seem like a good fit. The student applies to those schools and then sees where s/he gets in. The student visits those schools and then talks to the graduate students, finds out more about the program, gets a general feel for the place, and makes a decision. At no point is a “ranking” necessary. I never “ranked” the schools I applied to (what would have been the point?), and although I chose one over others, I would not say it was because I ranked it higher than the others, but just that it was a better fit for me. And I believe that students are in the best position to make these sorts of decisions for themselves, given the necessary information to make the decisions.
    I am not “happy” to rank the graduate students coming out of my program, and I have posted elsewhere on this blog about my discomfort with the practice. When I write letters for my students, I do my best to avoid comparing X to Y. (Too much work to dig out the old post, but if you are really curious, I will).
I can’t speak to SPEP because that is not my area and it’s not an area I know much about. But isn’t it only to be expected that those who are excluded are the most likely to speak up about their exclusion? Suppose we were talking about the exclusion of women, people of color, people with disabilities, or people who are LGBT. They are the people who are going to be (painfully) aware of their exclusion, whereas others might not even notice. For decades this discipline had all-male (mostly white, mostly abled, mostly straight) conferences and volumes, and not only did people rarely speak up about it, it was rarely even noticed. Of course men can and do speak up about these problems, but for the most part, it’s women who have taken the lead, especially in pointing out the initial problems. Should women have stayed quiet and waited, hoping that some men would notice and take action? To get back to the point, of course it is going to be primarily SPEPers who speak up about SPEP being excluded. And it is not just SPEP that has gotten short shrift from the PGR, as I note above; it’s pretty much anything that isn’t a LEMM discipline. I am not saying that this was intentional, but it is what happened.

  30. p

    I assumed that the choice of schools (3, 5, or 10 of them) is the result of “ranking” them higher than others for oneself. Usually when I choose something rather than something else, I do so because I think it is better. In any case, the point is not about what the student does, but about the usefulness of the PGR in helping her to decide.
    Well, one might avoid comparing X to Y in a recommendation letter. But once we decide to hire C over D, isn’t what we are ultimately doing ranking them?
    I am willing to get on board with including SPEP, but then I would suggest also including departments other than philosophy – e.g., comparative literature, English, and French departments, and so on – in the survey of places to do a PhD in philosophy. I do not see any reason to exclude those either.
    I am not sure that the exclusion of people of color or women from jobs, conferences, and so on is comparable to the exclusion of SPEP departments from Brian Leiter’s ranking. If it is, then Leiter is not the problem, and I suggest that the predominantly analytic departments open up and hire PhDs from Penn State, Duquesne, and Emory to be more inclusive.

  31. David Wallace

    Eric: that’s fair; I should have checked the premises more carefully.
    Rightly or wrongly, I took S7 as just saying that even with maximally generous premises, aligning good students with good institutions is ethically neutral because the winners cancel out the losers. My point is that this isn’t zero-sum: the wider goods of research and undergraduate education are helped by a better alignment of students to institutions. You’re probably right that S7’s stated stipulations don’t in fact cancel all the possible ways in which that can fail in practice.

  32. S7

    David: Yes, that’s what I was saying. I would think that to the extent that there is a correlation between a student’s research skills and their tendency to apply to “high reputation” schools, (a) it wouldn’t be a particularly strong one; and (b) rather than use reputational rankings, a better proxy would just be straight placement data — how many students placed in SLACs, research institutions, other, etc. In some ways I think that would create a better alignment; without the “prestige” distraction (or, with a lessened prestige distraction), students will ideally focus more on what they want to study and what kind of jobs they want afterwards.

  33. Roberta L. Millstein

    When you say “‘ranking’ them higher than others for oneself,” first, note that “for oneself” is key here; it suggests (what I think is correct) that the best schools for two people with the same broad areas of interest (e.g., philosophy of science) might not be the same. So, a singular ranking, such as that offered by the PGR, does a disservice by suggesting that some schools are better than others, period, not taking into consideration the different weight that people might put on different criteria, many of which are not quantifiable — or their more particular research interests (e.g., causation vs. realism).
    Second, one need not rank to find a list of ~10 schools. It can be much rougher than that. Some schools will be unacceptable for one reason or another. Some will be “must apply.” Others will be somewhere in the middle. If a student wasn’t sure whether a school was “must apply” or “middle,” I’d say, “apply.”
    Yes, when we hire we must unfortunately make a ranked list. That doesn’t make such lists sensible, or easy to determine. Often one is comparing apples to oranges. Often different faculty rank differently. Often it matters what sorts of things a particular program needs, or whether the person is a good fit in other ways. So yes, one is forced to rank, but it is artificial. One is not forced to rank when choosing a school, thankfully. One need only decide where to apply, and then if one is lucky enough to have a choice of places to attend, one chooses. Often such choices are based on particular factors that are really striking: an outstanding financial package, really “clicking” with a potential advisor, a good “feel” from your campus visit and the department community.
    I am neither arguing for including SPEP nor excluding SPEP. You were the one who brought up SPEP. I said that I don’t consider myself knowledgeable enough to comment.
    I did not say that “the exclusion of people of color or women from jobs, conferences, and so on is comparable to the exclusion of SPEP departments from Brian Leiter’s ranking” (although the two may be more connected than you think). You implied that we should not take the complaints of SPEPers seriously — that they are just complaining because they are being excluded. I was pointing out that we should not discount someone’s complaints about exclusion merely on the basis of the fact that they are the ones being excluded.

  34. Fritz Warfield

    Chris — Your most strident objections, then, are to the overall rankings and not to the specialty rankings, right? I wonder what you’d say about an analogue of my real example involving a student’s use of the specialty rankings that played out similarly, except that the student used the overall rankings? I guess you would say that the hypothetical student was simply lucky if she ended up in a good place using that guidance?

  35. Christopher Gauker

    Fritz, I think I could have been clearer about the comparison to horoscopes. A person might make a decision based on a horoscope and be happy with the result, but in that case the explanation will not be that the person made good use of the information contained in the horoscope, because there was not any such information. Likewise, a person might make a decision based on the PGR and be happy with the result, and in that case too I would say that the explanation will not be that the person made good use of the information contained therein, because there is no information therein. I should qualify that to this extent: The features of departments that I take PGR raters to be biased toward might positively correlate with faculty quality. Maybe it’s true that philosophy faculties in historically strong departments at otherwise strong universities tend to be better. To that extent there might be some “information” in the PGR overall rankings, but a lot of people think the PGR is a much better indicator than that, and I say that they are wrong. It would be an exaggeration to say that your hypothetical student was just lucky, because that student is probably using the links on the PGR to go to department websites and is probably also consulting with his or her mentors.
