This is a guest post by Mitchell Aboulafia.  It's a bit long, but it's worth reading in its entirety.  I don't necessarily endorse any particular part of it, but I think it makes some excellent points, especially about lack of transparency.  This feature in particular is, in my view, not tolerable given the enormous influence the report has; an influence that is aided, in no small part, by the very cooperation of the group of evaluators that the letter is addressed to.

Mitchell's post comes, in its entirety, after the break:

An Open Letter to Prospective Evaluators for the 2014-2015 Philosophical Gourmet Report–An Update*

Dear Colleagues,

A debate about the PGR and program rankings is now underway in philosophy circles.  Although there has been no organized public forum, this conversation is taking place on blogs, Facebook, and other media.   It has been spontaneous, coming on the heels of criticism of Brian Leiter’s behavior.  The good news, at least to this observer, is that there seems to be consensus on at least one point: the PGR is a flawed instrument.  But there the agreement ends.  There is wide disagreement about the extent and nature of the PGR’s problems.  Some argue that it is so flawed that it’s time to retire it, permanently, while others believe that its problems are minimal and that the PGR should be available for 2014-2015 and the years ahead.  Between these extremes are other views and other options, including a preference for suspension of the PGR for the present in order to address its problems. 

Since we have no hard data, it’s difficult to say what proportion of the philosophical community holds what position.  We can say that the debate has generated an enormous amount of interest.  There are many philosophers who think that we should not proceed with the PGR this year.  Leiter himself conducted two polls, advertised on his site, regarding whether there should be a 2014-2015 PGR.  The first set of results started coming in heavily against proceeding, but Leiter thought there might be something fishy about those results.  So, he offered a new poll through Condorcet Internet Voting Service, which for public polls uses IP addresses to eliminate multiple votes from the same person or location.  When Leiter closed the poll on September 24, 3,424 people had voted.  The question:  “Should we proceed with the 2014 PGR?”  The results:  2,104 No votes, 1,320 Yes votes.**  That’s a little over 61% in opposition, for a poll that Leiter himself conducted and advertised on his site, and which took place before the current debate on the PGR itself gained momentum.  (See Archive of the Meltdown.  It was, for example, in early October that word came that the University of Sheffield decided not to participate.)  It’s reasonable to suppose that Leiter hoped that he would receive support for the PGR when he did the poll.  When he didn’t, he dropped it.  This speaks to one of the PGR’s problems, namely, that its administration is private and very closely held, with very little input of any kind from the broader profession.  (Those who believe that the PGR is Leiter’s personal property, and that philosophers have no business telling him how it should be run because it belongs to him, need read no further.) 

Of course, the internet polls Leiter initiated don’t settle the question of how the profession as a whole would vote, but I think we can say they clearly indicate a good deal of interest in the question of whether the PGR should proceed.  From the poll as well as the anecdotal evidence, we can assume that there are a lot of people out there who would prefer that the 2014-2015 PGR rankings not go forward.

Let’s assume that I am correct that there is a general consensus that the PGR is flawed in some fashion, which even without my appeal to the anecdotal is a reasonable assumption, because, after all, there are no perfect evaluation instruments.  This is a point often made  by some of the PGR’s defenders.  As for the other extreme, I don’t think anyone who has been reading about this controversy in the last month would doubt that there are voices who see the PGR as fatally flawed.

If we grant that this range of views exists about the PGR, then a reasonable question for evaluators is the following:  How flawed would the PGR have to be for you to decide against participating this year, that is, what kinds of problems must it have?  Note that the question is whether you as an evaluator should endorse the current product as it stands by participating in this year’s survey.  I am not raising here the question of whether problems with the PGR can or cannot eventually be resolved.

Let me offer a list, by no means a comprehensive one (and certainly not in rank order), of ten problems that have been raised about the PGR.  It is by no means necessary to agree with all of these criticisms to take a pass on the PGR this year.  Just a few, perhaps even one, will do.

The Philosophical Gourmet Report . . .

  • lacks definitive criteria for the evaluation of departments, for example, what sorts of accomplishments by philosophers should weigh more heavily than others.  
  • lacks public and clearly defined guidelines for its evaluators, for example,  evaluators can choose to rank based on cursory impressions of departments or do a great deal of homework regarding departments, or anything and everything in between.
  • places evaluators in a position of weighing and using criteria differently, creating a survey that not only permits the comparison, in effect, of apples to oranges, but also leaves the selection of particular fruits to the evaluators. (The PGR’s “Methods and Criteria” page speaks of “different philosophies of evaluation.”) 
  • has too few evaluators, selected by too few people, to adequately evaluate many, if not most, of the specializations in philosophy. 
  • imposes its own idiosyncratic cultural preferences on a profession devoted to critical and communal inquiry as the means to truth. The PGR has not sought a public debate on its organization and procedures, in spite of serious concerns about bias, distortion, and unwarranted marginalization.  It is moving ahead with this year’s PGR in spite of obvious concerns of many in the philosophical community and a NO vote from Leiter’s own poll regarding the question of proceeding with the 2014-2015 PGR.
  • promotes a halo effect with respect to departments that adversely affects the job prospects of candidates, that is, reputational pedigrees often trump candidates’ actual accomplishments.
  • is functionally indifferent to the undesirability of the halo effect.  Leiter is candid about the fact that its evaluators have been influenced by a halo effect in the past–"As one respondent put it a few years ago: 'surprisingly tough to say what I think, without the institutional halo effect front loaded.' "  However, the PGR's response, namely, to withhold the names of universities from reviewers, fails to address the halo effect that is created by the presence of a "star" in a particular department.  The “star” effect also renders any claim to anonymity of departments indefensible.  (Evaluators often know where so-and-so "star" is currently teaching, or know other members of a department’s faculty.) 
  • does not have an independent Board with real oversight.
  • fosters a parochial view of the discipline, under the guise of neutrality.  In Bharath Vallabha’s words, “[i]nstead of having different ecosystems where some value Princeton as the best department, and others value Notre Dame as the best, and yet others value Vanderbilt as the best, and so on, PGR fosters the idea that there is one over-arching sense of what are the best departments, and those are the ones which are ranked by PGR and in particular the ones which are at the top of those rankings.”
  • does not have a sufficiently diverse pool of evaluators, especially when we consider not only their present institutions, but where the evaluators received their graduate degrees.  (See, for example, Bharath Vallabha’s “The Function of the Philosophical Gourmet Report:”  “The surveys don't aim to capture what a broad range of philosophers – those associated with both ranked and unranked schools – think. Rather, they capture what people who have passed through, or are affiliated with, ranked programs think is the ordering of the ranked programs.”)

I am convinced that rankings in general do more harm than good.  A comprehensive informational website with a sophisticated search engine would be my preference, one that would allow individuals or departments to generate customized rankings based on selected criteria.  But I will not try to convince you of this here.  Instead, I want to run some numbers by you, prospective evaluators, and ask that you consider them, in light of the problems with the PGR, before deciding to fill out this year’s survey. Many invited evaluators do not fill out the survey when they receive it, and there are good reasons for you to take a pass on it this year.  Here’s why:

According to Leiter, he is currently working from a list of 560 nominees to serve as evaluators for the 2014-2015 PGR.  During the last go-around in 2011, 271 philosophers filled out the part of the survey dealing with overall rankings, and “over 300 philosophers participated in either the overall or specialty rankings, often both.” Leiter claims that in 2011 the on-line survey was sent to approximately 500 philosophers. So, many philosophers decided NOT to fill it out even after receiving it.   (Also notice that from the information he provided we don’t know how many filled out the portion of the survey dealing with specializations.  All we know for certain is that it must have been at least 30, that is, 271 + 30 = just over 300.)

Three hundred may appear to be a reasonable number of evaluators, but the total number of participants obscures crucial details, and one doesn’t need any sophisticated form of statistical analysis to see how problematic these are. If you look at the thirty-three specializations that are evaluated in the PGR, slightly more than 60% have twenty or fewer evaluators. That’s right, twenty or fewer. Think about this for a moment:  twenty or fewer philosophers, in one case as few as three, are responsible for ranking 60% of the specializations found in the PGR, what many consider to be the most important feature of the PGR.

But it is actually worse than this.  There are certain areas that have many fewer evaluators than other areas. For example, the PGR lists nine specializations under the History of Philosophy rubric. Six of the nine have twenty or fewer evaluators. One of the specializations, American Pragmatism, has only seven.  The only general category to have the majority of specializations with more than twenty evaluators is “Metaphysics and Epistemology.” Five of its seven specialties have more than twenty.  But none of the others–Philosophy of Science and Mathematics, Value Theory, and the History of Philosophy—have a majority of specializations with more than twenty evaluators.  In the three specializations outside of these rubrics we find this: eleven evaluators for Feminism, three for Chinese, and four for Philosophy of Race.

The case of Chinese Philosophy is noteworthy.  In 2009 Manyul Im, a supporter of Leiter’s rankings at the time, posted a blog about the rankings of Chinese Philosophy, and let Leiter know that he was going to publish the post.  Professor Im had two basic complaints.  For Chinese Philosophy, according to Im, the rankings were misleading about the differences in quality between programs, and the pool of evaluators was too small and their backgrounds were too similar.   He argued against ranking programs in Chinese Philosophy and instead recommended “an informative list of viable programs.”  The 2011 PGR did reduce the number of ranked groups in Chinese, from four to two, presumably to help address the concern about reading too much into the differences between groups.  However, the PGR still uses the same three evaluators, two of whom Im pointed out were students of the same person, while all three share a similar “interpretive paradigm.”  Here’s an instance of the arbitrariness of the PGR as a ranking system: in the same year, 2011, the Philosophy of Race, with four evaluators, one more than Chinese, received the whole megillah, and has five ranked groups.  Go figure.

But you don’t have to take my word about the small number of evaluators for the specializations.  Here’s what Leiter says on the 2011 PGR site.

Because of the relatively small number of raters in each specialization, students are urged not to assign much weight at all to small differences (e.g., being in Group 2 versus Group 3).   More evaluators in the pool might well have resulted in changes of .5 in rounded mean in either direction; this is especially likely where the median score is either above or below the norm for the grouping (emphasis in the original).

Obviously, urging students “not to assign much weight at all to small differences” does not address the issue. No weight should be assigned to specializations ranked by so few people. This is not rocket science. This is common sense. You can’t evaluate the quality of specializations that have so many facets with so few people, who themselves were selected by another small group of people, the Advisory Board, which clearly favors certain specializations given the distribution of evaluators. (This is especially true when there hasn’t been a public discussion about what should constitute standards for rankings of specializations in philosophy.)  Leiter’s advice makes it appear that one should take the specialty rankings seriously if only we refrain from assigning too much weight to small differences. But if this is right, why didn’t Leiter take Professor Im’s advice and not have any rankings in Chinese Philosophy in 2011?  Two groups with three reviewers is absurd by Leiter’s own logic, but there it is.  My hunch is that, as silly as the rankings of Chinese Philosophy into two groups may seem, had Leiter not done it this way, the door would have opened for those in other specializations to argue for Im’s suggestion of “an informative list of viable programs.”  But this is anathema to the rankings mentality of the PGR.  Everything—everything—is independently better or worse than something else, in PGR-land.  Once you start to rank, it’s rankings all the way down.  For example, the PGR will use them even when it won’t show the scores, “due to the small number of evaluators”:   

[The PGR’s Chinese Philosophy specialty listing, which ranks programs in groups without displaying their mean scores]

I honestly don’t know how one could fill out the survey this year in good faith knowing that so few people are participating in ranking so many specializations. When you fill out the survey you are making a statement. You are providing your expertise to support this enterprise. The fact that you might be an evaluator in M & E, with more evaluators than the other areas, doesn’t absolve you of responsibility as a participant. At minimum, you are tacitly endorsing the whole project.

Ah, you say, but perhaps this year’s crop of evaluators will be more balanced. However, the way that the PGR is structured undermines this hope. The evaluators are nominated by the Advisory Board, which has roughly fifty members. Most of the same people are on the Board this time around as last time. But here’s the kicker: Leiter asks those leaving the Board to suggest a replacement.  The obvious move for a Board member here would be to nominate a replacement in his or her own area, probably from his or her own circle of experts. In Leiter’s words, “Board members nominate evaluators in their areas of expertise, vote on various policy issues (including which faculties to add to the surveys), serve as evaluators themselves and, when they step down, suggest replacements.” So there is no reason to believe that the makeup of the pool of evaluators will have markedly changed since the last go-round.

The 2014-2015 PGR will be in place for at least the next two years, maybe longer given the difficulties it faces. There are a lot of young people who will be influenced by it. If you have been invited to serve as an evaluator for 2014-2015, please consider taking a pass on filling out the survey. If enough of you do so, the PGR will at minimum have to reform its methodology.

Given the recent and continuing publicity surrounding the PGR, it’s also important to consider how it may be used against the interests of philosophers.  Faculty members from other disciplines are already discussing its flaws on the web.  This will only increase as the dispute within philosophy about the PGR intensifies.  The cat is out of the bag.  Those who try to use this flawed ranking system will soon be challenged by savvy chairs in other departments, either directly or behind closed doors in discussions with deans and provosts.  We should try to avoid the embarrassment of having people outside of philosophy, especially those who are familiar with survey methodologies and related data collection, discover our support for such a compromised ranking system.  Taking a pass on this year’s PGR is not only the right thing to do, it is prudent.  But only you, the invited evaluators, get to decide whether the PGR is too flawed to endorse by filling out your surveys.  The rest of us have no say whatsoever in this decision.   

Three disclaimers:

1) In this post I purposely sought to keep the statistics as simple and straightforward as possible in order to raise basic questions about imbalances and sampling size in the current PGR.  Gregory Wheeler has a nice series on some of the more in-depth statistical work in “Choice & Inference.”  See the series and concluding piece, “Two Reasons for Abolishing the PGR.”

2) If there is available public content regarding changes to the PGR that I have missed, I’d be grateful for a pointer in that direction.  As far as I know, no fundamental change is taking place in the 2014-2015 PGR.

3) I have calculated the number of evaluators in the different categories as best I could from the information available to me.  Any errors, to the best of my knowledge, are small enough that the case I make here stands in any event.

___________

*A portion of this post originally appeared on UP@NIGHT, October 19, 2014, in a series of pieces dealing with rankings and the Philosophical Gourmet Report. 

** Leiter’s words announcing the second poll: “So here's a different poll service, which in the past has done better at blocking strategic voting.  Here it is.  Rank "Yes" #1 if you want the 2014 PGR to go forward; rank "No" #1 if you do not want it to go forward.  We'll see how the two come out.”  I can’t find anything on his blog about the poll after this.  If there is a public response Leiter made to the negative outcome, I would like to hear about it.  


27 responses to “An Open Letter to Prospective Evaluators for the 2014-2015 Philosophical Gourmet Report (A guest post by Mitchell Aboulafia).”

  1. Neil Sinhababu

    “How flawed would the PGR have to be for you to decide against participating this year, that is, what kinds of problems must it have?”
    One possible answer: “To make me not participate, there would have to be a better publicly available system of rankings I’d support instead. And it’s clearly the least-flawed public system we’ve got.”
    If it’s the least-flawed ranking system we have, it may be worthwhile to make it better, even if its flaws are very deep. Then people who want to consult rankings will be guided by better advice. And if one’s efforts lend some credibility to the PGR, that’s better than having the credibility go to the QS rankings or the for-profit consultants at Academic Analytics, who will make up their own rankings and sell them to your university administrators.
    Is there an existing system of rankings that has a larger number of philosophers contributing rankings, or has more definitive criteria, or a more independent board? (I can’t take issue with the board’s independence, as they made a big change in the management of the rankings quite recently.) If not, participating in the PGR may be the best option, even as imperfect as it is.


  2. Jessica Wilson

    Three points.
    First, I cannot see why having a publicly available system of—let’s not forget, broadly linear—rankings is supposed to be a primary desideratum. There is such huge diversity in topic, canon, and methodological standards, even within specializations, that the idea that departments, areas, or philosophers can be shoved into rank-and-file ordering is ludicrous. Moreover, we have good reason to believe that any present attempt to either subjectively or “objectively” (e.g., by attending to publication venue or honorary positions, etc.) rank departments, areas, or philosophers will encode and perpetuate the implicit bias that is a pervasive and hugely destructive aspect of our profession.
    Second, even granting that there is a positive aim in sight here (e.g., helping students select graduate programs suited to their interests and inclinations), it’s transparently problematic to reason that in service of some positive aim, we should implement a deeply flawed methodology on grounds that no better means is available. There are really two problems here, both illustrated by the fact that it’s not OK to torture someone, even if that’s the only way to get some information: here the methodology itself has serious negative consequences, and not unrelatedly, the information gained is deeply compromised. I don’t see that the “there’s no better alternative” reasoning in support of continuing on with the presuppositionally flawed and implicit-bias-encoding-and-perpetuating PGR rankings is any better, even if the outcome is not as egregious.
    Third, in any case there are better (if not perfect) ways of achieving the positive aim of assisting would-be graduate students; namely (as Mitchell notes) via a comprehensive informational website with a sophisticated search engine, allowing individuals or departments to generate customized rankings based on selected criteria. I understand that the APA has something like this in the works.


  3. Eric Winsberg

    New blog post: “Jessica Wilson compares PGR to torture”!
    Seriously though: great points. Arguments from being the least flawed existing system are a bit weak. Especially when we recall that the whole recent “fracas” started from the perception that Brian Leiter was using the power that being editor of the PGR gives him to attack folks who were trying to create less flawed alternatives.


  4. Neil Sinhababu

    Perhaps it’d be better to have no rankings at all. But “no rankings” is not a live option — if we don’t rank ourselves, QS and other media organizations will come up with shoddy rankings so they can sell ads, and use their media power to promote them heavily. Our real choices are to be ranked by others who may not care about philosophy, or support our own ranking system against others’ bad rankings. I choose the second. Do you choose the first?
    I acknowledge the biases in the PGR. But I’d rather take my chances with those biases than the biases of university administrators who may know nothing about philosophy. (People talk about prospective grad students a lot and I hope the PGR is useful to them, but we really need to talk more about how to control administrators.) Deans may like the idea of having a top philosophy department, but have nothing but their biases as a guide about how to get it. The PGR provides a useful counterweight to their views.
    I’d like to see such a website too! But nobody should think that scrapping the PGR and setting up such a website would get rid of rankings. (See 1 — those who don’t support a ranking system for themselves are ruled by the rankings of others.)


  5. Robert Gressis

    One argument I see PGR defenders often make is that if you don’t have the PGR, then prospective graduate students will be at the mercy of their faculty members for guidance about which programs to apply to, and that the PGR at least offers an alternative view for students to consider, so we should have it. What’s the main response to this argument? Is it just that the PGR’s negative effects outweigh this positive effect, or is there another response?


  6. Marc Moffett

    Jessica Wilson has convinced me that the ranking process is probably irredeemable, and her point above about a strict ordering of programs is important. But suppose we have a PGR-like ranking. It is not clear to me why the evaluators need to be nominated and screened. Why isn’t every person who holds an academic position (TT or not) allowed to evaluate? In particular, it does not seem to me that the fact that someone is highly research active is either necessary or sufficient for being a good or bad evaluator. Indeed, there are certainly benefits in objectivity from people who don’t have so much skin in the game. Being more catholic in selecting PGR evaluators would help to address some of the (excellent) points about the specialty rankings. However, it would also bring into high relief some of the other problems of commensurability of evaluation standards. I haven’t read everything on this brouhaha, and would be interested to hear if there is a defensible rationale for the kind of evaluator selectivity Leiter implemented. (From that you can surmise that I don’t find Leiter’s own rationale particularly compelling.)


  7. Jessica Wilson

    Thanks, Eric (and also for the inoculation).
    Neil, in re 1: I think ‘no rankings’—at least no rankings that anyone takes seriously—is a live option. So what if media outlets or whatever come up with shoddy rankings? I don’t think many pay any attention to these, but even if some do, the right response is not to ourselves try to rank the unrankable, encoding and perpetuating implicit bias in the process. The right response is rather to set about the task of educating administrators, graduate students, search committees and others about the deep-tissue diversity of our discipline.
    In re 2 and 3: Same thought.
    Robert: Speaking for myself, I see a number of responses, including that (1) the rankings output is so methodologically flawed and so infected by implicit bias that the value of the input is next to nil, and indeed might impact the student’s decision in clearly problematic ways (e.g., leading students to go to “higher ranked” schools since Rank Uber Alles, even though a lower-ranked school would have been better for them, other or all things considered); and (2) there are other ways of overcoming “local faculty bias”, including the informational sites discussed above, blog posts by graduate students, etc.
    Here I think it’s worth remembering that people thinking about going to grad school in philosophy are not children. They are adults who are moreover typically very smart; as such, they are capable of doing the sort of research about what faculty members where are doing what, what the fellowship packages are like, what the job placement records at a given department are, etc., that could provide appropriately substantive input that could and should enter into these sorts of decisions. This sort of investigation—which again, a smart undergraduate is perfectly well capable of carrying out—is worth a great deal more, or so it seems to me, than the off-the-cuff evaluations of a few strangers.


  8. Ed Kazarian

    Neil asks: “1. Perhaps it’d be better to have no rankings at all. But “no rankings” is not a live option — if we don’t rank ourselves, QS and other media organizations will come up with shoddy rankings so they can sell ads, and use their media power to promote them heavily. Our real choices are to be ranked by others who may not care about philosophy, or support our own ranking system against others’ bad rankings. I choose the second. Do you choose the first?”
    Yep, I totally choose the second. I think that was one of the things I was explicitly arguing for in my post before this whole debate really took off, and nothing I’ve seen said since has changed my mind one bit. I would far prefer that rankings be done by people outside philosophy, with as little demonstrable connection to philosophy as possible, so that we can dismiss, disavow, and ignore the heck out of them as much as possible. Does this, by itself, solve the problem of the administrative demand for rankings and other metrics, no matter how pitifully invalid and laughable? No. But I think we move in the right direction by refusing to lend our own effort to satisfying such demands, especially since that leaves us in a better position to object if/when the nonsense is used against us. As I put it in that post: “why are we so determined to give administrators a weapon to use against us?”
    I’d also argue that ‘this is the best ranking we have’ ignores the by-now-evident fact that it’s a craptastic ranking—and that it is so for pretty much inevitably structural reasons. It’s partial, incomplete, subject to massive and obscure biases, and it’s incredibly unclear what, if any, form of philosophical ‘quality’ it actually measures. If this is the best we can do, then I repeat my question from weeks ago, namely why in God’s name are any of us legitimating this stuff by lending our labor to its production.
    Finally, to return to the issue of ‘but administrators are going to use them anyway’: perhaps the solution to that is to spend rather less effort trying to rank order ourselves and rather more effort building the sorts of institutions of collective solidarity that would support and enable significant resistance to these demands. Imagine if we were less interested in ranking and in determining precise scales of merit and more interested in mutual support and encouragement, and in contesting the exploitation and marginalization of any members of our profession within the academy at large?


  9. Amy Lara

    I agree with Marc Moffett. Some have argued for the PGR on the grounds that it provides good sociological data about what “the profession” thinks about different departments. Since opinions about the relative quality of different departments might influence hiring decisions, prospective grad students have an interest in knowing what these opinions are. But doesn’t it follow, then, that the most useful data would be data gathered from the entire profession? We all hire, after all. Or do grad students only want to know what people who are hiring for more “desirable” jobs think? Elite grad students would likely prefer those data to more general data, but that makes me think Bharath Vallabha was right that the point of the PGR is really to serve the interests of elite grad students.


  10. Neil Sinhababu

    Ed, thanks for responding to the issue regarding administrators, which other people haven’t addressed. I’d be interested in hearing more about the “institutions of collective solidarity” that you describe. At this point, I’m not seeing how they could become sufficiently powerful to stop administrators from trusting the rankings of for-profit consultants and media outlets with significant promotional machinery. And even if you mobilized against these enemies, the biases in the minds of administrators themselves remain as a silent killer. But maybe you have some interesting proposals to offer here.
    Publicly available rankings by philosophers are the best counterweight I can see to all these baneful forces. All of Mitchell’s arguments (small number of evaluators, halo effects, no definitive criteria, etc.) apply even more forcefully against leaving power in the hands of the consultants, the media, and administrators than against the PGR itself and its hundreds of actual philosophers. I think the PGR is far from the best way we could do rankings, and I plan to write some things soon laying out a way we could build a much more democratic ranking system that avoids many of our current problems. But undermining the PGR just seems like a way of giving power to forces that have all the problems the PGR has, only more so.


  11. Bharath Vallabha

    Neil, how do the PGR rankings help the hundreds of departments that are not ranked by PGR at all and which are the ones most susceptible to administrative forces using outside rankings? The PGR rankings track where the most prestigious philosophers are; and since departments outbid each other financially to get prestigious philosophers, the rankings track, among other things, which departments have the most money. Ironically, the rankings most help the departments which are most secure. The departments which are in the most immediate danger cannot afford prestigious thinkers, and so they can’t make it into the ranked programs, and so PGR reinforces the sense that these departments are not that good and are expendable.
    Here is an alternative (which is not going to happen anytime soon, but it is interesting to think about why it wouldn’t). Suppose current philosophers at the top PGR-ranked programs leave to go to unranked programs. Then we do a reputational survey. Assuming such a survey tracks the reputation of the philosophers’ work and not the department they are at, the survey results shouldn’t look all that different from the current PGR rankings. But in this alternate case, the departments most in need of financial support would be the ones at the top of the rankings, and so could better resist the pressures from administrators. Since there is no lack of talent in philosophy, the departments with the most resources could still hire good people. Unlike this imagined scenario, in which faculty reputation would be used to help the hardest-hit departments, the current PGR doubles down on the most well-off departments. Given that PGR surveys are filled out by people who are, or have been, associated with the ranked programs, it looks like the PGR is a way for the ranked programs to protect themselves at the cost of the unranked programs.


  12. Neil Sinhababu

    Bharath, the way I’m seeing the problem you describe is that the PGR’s umbrella of protection doesn’t extend beyond the top 75 or so departments (50 US, 25 or so internationally). The solution, I think, would be to rank more departments and expand the umbrella. This probably isn’t feasible under the current overall-rankings-first system, since nobody can rank 150 departments overall. But a specialty-rankings-first system might be able to give 120th ranked departments some of the credit they deserve, since they’d be on the map in a couple specialties. That’s actually the system I favor as a replacement for the overall-rankings-first current PGR. And the way to achieve it is not by trying to undermine rankings, but by expanding them.


  13. Mitchell Aboulafia

    Neil, Jessica has ably addressed the red herring of others producing shoddy rankings as a reason to keep the PGR. I want to delve a bit further into the issue. I have now heard the argument that we should keep the PGR because we will be ranked or evaluated anyway–and it will be far worse if others rank us, catastrophic in fact–too many times to count. I have come to think of this, somewhat sentimentally at this point, as the argumentum ad apocalypse, or the chicken-little defense of the PGR. I read what you say as a version of this defense. I will state flatly: it’s wrong, and for multiple reasons. Here are a few points to consider.
    1) The PGR has sold itself as a ranking system extraordinaire because it is based on the evaluation of philosophers by philosophers. This is intended to make it more compelling than, say, a ranking done by U.S. News. The “by and for philosophers” claim gives the PGR a unique kind of legitimacy.
    2) But if this is so, and if, for the sake of discussion, we say that the methodological problems of the PGR are comparable to those of other evaluation systems, then the inadequacies of the PGR would have more dire consequences because of its unique status. Its rankings are taken more seriously, and therefore we must be more vigilant about its flaws than about those of other systems. Have the PGR’s keepers been extra vigilant? I think not.
    3) However, there is no reason to assume that all other rankings, ratings, or evaluation systems would in fact suffer from as many problems, or the same types of problems, as the PGR. The PGR has very specific failings, and people who know how to develop surveys can easily point them out to you. (See the link below to a short article by Zachary Ernst.) The flaws I mention in my post are not found, or are not found to the same extent, in all surveys. (I am not endorsing rankings here, just making it clear that one can’t argue for the PGR by saying, ‘Oh, everything else would be worse or have the same problems.’)
    4) It’s one thing to have to argue against flawed systems that might be produced by U.S. News and quite another to provide cover for a flawed system with our imprimatur. We need to have some professional integrity here. Philosophers should struggle against misleading rankings systems, for example, by supplying administrators with arguments and information, as Jessica points out, not by offering a system we know is problematic.
    5) There are significant downsides to continuing with the PGR in its current form. I can tell you that if I were the chair of a department competing with a philosophy department for funding, and I knew that the philosophy department was using an instrument as flawed as the PGR to make its case, I wouldn’t hesitate to point this out to a dean or provost behind closed doors. Not only out of self-interest, but because I would see it as unfair. The philosophy department is using a bogus instrument to receive funds that I believe my department has legitimately earned. (This is not an idle speculative scenario. Look on the web. People in other disciplines are becoming more aware of the weirdness and problems of the PGR. And those of us who know how problematic the PGR is will not drop our brief, not this time around.)
    6) There are departments that have managed to survive, flourish, and place students in good jobs despite being dismissed by the PGR. I am not saying that they haven’t been hurt in some fashion, or that their graduates are not at a disadvantage on the job market, but somehow they have managed, not only without rankings but in the face of a rankings system that is actively hostile to them! Philosophers have to stop living in fear of going without the PGR’s rankings. It has become a bad addiction. We know how to argue. We can make our case based on real data, e.g., faculty publications and placement records of graduate students. Other disciplines do it all of the time. Are we afraid that they can produce better arguments than philosophers?
    7) If, as you admit, the PGR is biased but might be fixed by expanding the number of departments (although I don’t see how this is going to address the issue of the closed pool of evaluators in the present system, as Marc and Amy note), then I say: why not follow the suggestion in my post and suspend the PGR this year? Don’t put out such a poor product. Take some time and see if it can be fixed. The sky won’t fall.
    However, I don’t think that my suggestion of suspending it will appeal to many of those who are benefitting from the PGR as it stands. They are afraid that if we suspend it for a while and take a hard look, we might decide that it needs to be modified or even closed down. In order to prevent this, in addition to the argumentum ad apocalypse or the chicken-little defense, another strategy has arisen of late from PGR defenders. Supporters now acknowledge that the PGR is flawed, but only mildly flawed, something “we” can live with for the benefits it provides. Well, perhaps they can live with it—for the reasons suggested in Bharath’s remarks—but those being harmed by it cannot.
    https://www.dropbox.com/s/qd9gdl7ozofhit0/emperor-1.pdf?dl=0


  14. Bharath Vallabha

    Neil, I think on the specialty-rankings-only model you are suggesting, departments with fewer resources will become identified with just the specialties they are ranked in. The administrators can say that only the positions that are ranked can be kept and that the other positions in the dept are expendable. This would radically alter the kind of education many of the unranked depts currently offer, which tends to be less tied to specializations as defined in the “top” programs, and that is a great thing about them.
    Using any version of the PGR as an “umbrella of protection” seems to me problematic. One can expand the number of departments ranked, expand the number of evaluators, and make it more sub-field focused. But what is retained is a basic top-down approach that says that NYU or Pittsburgh are the best models of what philosophy departments can look like; that less well-off depts need to emulate them to survive. This strikes me as the well-off departments using their material and social advantages to impose their departmental culture and self-conceptions on less well-off depts.
    In order to think about how depts can resist administrative pressures on them, there has to be an open, honest conversation in the profession about the wide differences in material resources, as well as in philosophical conceptions. Rankings in some sense can play an important role in such a conversation. But the PGR is, and has been, a main obstacle to such an open conversation, because it creates a false sense of understanding the layout of the profession. A break is needed from the PGR to better understand just what the on-the-ground conditions are at most departments.


  15. ck

    Thanks for your post, Mitch. I am sympathetic, but I think you are being unnecessarily unpragmatic about all of this. You write in your most recent comment, “I don’t think that my suggestion of suspending it will appeal to many of those who are benefitting from the PGR as it stands.” I agree. I also think that’s a bigger deficit in your argument than you are acknowledging. We can either dismiss ‘those who benefit from it’ (which I take as a proxy for the PGR Board) as a ruthless cabal, or we can try to be both charitable and realistic about the matter, by remembering that they are our colleagues after all. Even though many of them might have a different vision of philosophy than I do, I am not so convinced of my own vision as to dismiss them out of hand. So what do we do from there? Try to engage them a little more seriously, perhaps.
    One thing I haven’t seen in all of the recent discussions is a good proposal from those frustrated with the PGR concerning how we can better engage with those running the PGR to change things. Or a proposal for how we can develop alternative listings of grad programs that don’t presuppose the abolition of the PGR. (Eric S. posted on this in his note, and I criticized him for possibly not being serious about it, but in fact from his reply it seems like he is rather serious about it — which is something we should welcome!) I will say that I have seen good proposals for providing better information on placement. Such proposals might benefit from working with the PGR (hey, maybe they can add a link to those placement rates??) rather than against it. Calling for abolition in this case just seems rather extreme (analogizing to torture, really??) given the unexplored territory now opened up by BL’s promised departure.


  16. JDRox

    I agree with Neil that there will still be rankings. Maybe it won’t be US News, but we will be ranked. Indeed, I’m surprised that Mitchell denies this, since his post seems to approvingly quote Bharath Vallabha saying that different cliques would rank different schools differently. That’s exactly one of the things Neil is worried about! Of course, Neil’s cliques are administrators, while Vallabha’s cliques are groups of philosophers, but I don’t think that will make things much better. If there are many slightly or significantly different rankings–or, as some might put it, many slightly or significantly different sets of biases in play when it comes to hiring, graduate school admittance, grant evaluations, etc.–why think that will produce a desirable result? It seems better to have one public and regulated set of biases than for every department to have its own idiosyncratic biases of which most people are unaware. Basic human psychology tells me that departments, search committees, etc. will all still have opinions about which schools are better than others. Those opinions will affect hiring decisions, graduate school admittance decisions, grants, etc. By my lights, it seems much better to have one quasi-official opinion that is publicly known than to have it all be a big mystery. Focusing just on prospective graduate students, here is a simple argument supporting the PGR (or some ranking scheme) that I’ve never seen rebutted:
    1. The quality of one’s graduate education depends in large part on the quality of one’s peers–the other students in the program.
    2. The quality of one’s peers at a program will depend in large part on how hard it is to gain admittance to that program.
    3. Hence, the quality of the graduate education one can expect to receive at a program depends on how hard it is to gain admittance to that program. (by 1&2)
    4. How hard it is to get into a program depends on that program’s perceived quality.*
    5. Hence, the quality of the graduate education one can expect to receive at a program depends on that program’s perceived quality. (by 3&4)
    6. Prospective students have a vested educational interest in the quality of the graduate education they will receive (in addition, of course, to their job prospects, which points towards a distinct and more discussed argument).
    7. Hence, prospective students have a vested educational interest in the perceived quality of philosophy programs. (by 5&6)
    8. Prospective students also have a vested financial interest in how hard it is to get into a program (so they don’t waste money applying to places that are out of reach).
    9. Hence, prospective students have a vested financial interest in the perceived quality of philosophy programs. (by 4&8)
    *Argument for (4): Essentially, schools try to admit the best students they can, and they have at least some success at it. I am talking about overall patterns here. Likewise, the best students apply to (and will choose to attend) the schools perceived to be the best (perhaps overall best, or best in their projected area, or some balance of the two). Hence, the best students will, in general, tend to go to the schools perceived to be the best. (Again, where, if you’re interested in applied ethics, e.g., Bowling Green counts as one of the best schools.) A school that is generally perceived to be bad just cannot have high admission standards (assuming it must admit some students from time to time). Likewise, a school that was perceived to be good but tried to have low standards for admission would fail: it would be flooded with good applications, and so its admission standards would be de facto high.


  17. Bharath Vallabha

    JDRox, Assuming 50 ranked programs, each with slots for 10 grad students a year (this is being generous), that is 500 grad students a year in the US who get to have the “best” education. Assuming in a given year each ranked program has 3 faculty openings (very generous), that is 150 grad students who can get jobs at ranked programs, and so 350 grad students from ranked programs who have to get jobs in unranked programs. If we assume that the ranked programs form an ecosystem in the profession, that means that, PGR or no PGR, this ecosystem cannot provide jobs within itself for 70% of its graduate students. Even assuming the 30% that get the jobs in the ranked programs are objectively the best, that still leaves a big problem for what to do with the remaining 70%.
    The PGR wasn’t consciously created as a way to rectify this situation, but it is amazing how convenient it is for the ranked programs. The rankings which altruistically help students be part of the “best 500” graduate students each year also enable the roughly 70% of those students who must look elsewhere to leverage having been part of the best 500 when they are on the job market. Particular individuals can “win” by using the PGR to initially be part of the best 500 and then later using that fact in getting a job, but the system that enables them to win is also slowly but surely making things harder for departments outside of the ranked programs.
    The benefit for an individual of using the PGR in choosing a grad school doesn’t come for free. It comes at the cost of contributing to a system which is vastly unfair to most philosophy departments, which are unranked and which therefore are the first to face budget cuts. Just as evaluators for PGR surveys face a tough decision, so do prospective students. The alternative to the PGR doesn’t have to be the older system of privileges; it can involve thinking about the well-being of the profession as a whole.
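    For readers who want to check the back-of-the-envelope arithmetic above, here is a minimal sketch. The inputs (roughly 50 ranked US programs, 10 admitted students per program, 3 openings per program) are the assumptions stated in the comment, not measured data:

    ```python
    # Back-of-the-envelope check of the placement arithmetic above.
    # All three inputs are assumptions from the comment, not measured data.
    ranked_programs = 50    # US programs ranked by the PGR (per the thread)
    students_per_year = 10  # admitted grad students per ranked program ("generous")
    openings_per_year = 3   # faculty openings per ranked program ("very generous")

    graduates = ranked_programs * students_per_year       # 500 new grad students/yr
    jobs_at_ranked = ranked_programs * openings_per_year  # 150 openings/yr
    placed_outside = graduates - jobs_at_ranked           # 350 must look elsewhere

    print(graduates, jobs_at_ranked, placed_outside)
    print(f"{placed_outside / graduates:.0%} must find jobs at unranked programs")
    ```

    Varying the inputs shows how robust the conclusion is: even doubling the openings to 6 per program still leaves 40% of graduates looking outside the ranked ecosystem.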


  18. Bill Wringe

    There seems to be an assumption in this argument that ‘quality’ in an applicant is a scalar quantity; and also that the quality of one’s peer group is simply a function of the quality of individual members of that peer group. The first is obviously wrong, and I suspect that the second is as well.


  19. Mitchell Aboulafia

    JDRox: I want to thank you. You have made my case for me. First, I believe that the question about the PGR being better than nothing or some mystery alternative has been answered. If it’s a flawed, biased, and compromised instrument with the imprimatur of philosophers, it’s not better. It’s an embarrassment. We can fight bad surveys–other disciplines do. But we shouldn’t be supporting one. Period.
    Second—and this is where you make my case for me—is your appeal to perceived quality, saying that perceived quality will bring better students into a program. I was under the impression that ever since Plato dealt with the sophists, the idea was not to go with perceptions but with truth, or at least with assertions that we could warrant. I don’t think we want to go with perceptions, for many reasons. Here’s one: they could be wrong. The PGR is designed in such a way that it’s almost guaranteed to generate faulty perceptions. Here is some empirical evidence:
    While it’s often difficult to compare students in programs because of their different interests, one common measure, its ultimate accuracy aside, is GRE scores. When I was Head at Penn State, a so-called “continental underground” program, graduate student GRE scores were comparable to those at top PGR schools. It was not unusual for us to have students with perfect or near-perfect scores. Penn State has gone through some changes since then so I can’t tell you where they are today on the GRE front. However, if you check out some of the other schools in that “underground,” Stony Brook and Boston College, for example, you will currently find high test scores. Your argument, with its obliviousness to this phenomenon, has made my case for me. It is an instance of the ignorance of non-PGR programs that the PGR fosters, leaving people to make the kinds of assessments that you have made based on bad information and misperceptions. We shouldn’t have perception machines, and definitely not ones that have many damaged parts. The PGR is such a machine.
    CK: I see that you are sympathetic, and I understand that you are trying to be even-handed. But the claim that I am not pragmatic enough is flat wrong, unless you mean by pragmatism deference to the status quo. The symmetry you’re suggesting here is bogus. The issue isn’t that these folks have a different vision of philosophy than I do. I take the pluralist stuff seriously. The problem is that they are forcing their vision on the profession and won’t debate it, with very rare exceptions. How many members of the Advisory Board have joined this conversation? You say, “Try to engage them a little more seriously, perhaps.” Really? This has been the problem? Many people, in several different venues, have tried to foster a broader public discussion of the desirability and workability of program rankings in philosophy. My first post on the topic was a plea for discussion and debate. I said, “I am not asking that those who believe in rankings accede to my feelings or brief critical comments. . . . I am asking them to be willing to behave like philosophers, that is, engage in a real public debate.” http://upnight.com/2014/10/01/rank-and-yank-whats-next-for-the-philosophy-rankings-game/
    You say they are our colleagues; well, maybe. But for me the definition of a colleague includes sufficient respect for people outside one’s circle of friends to talk openly about shared concerns and interests. The silence from Advisory Board members is deafening. I’m not saying it’s a cabal, but at this point, the onus is on them to reassure us here. Participate in the public conversation we’re having about the PGR—then we’ll know they’re colleagues. We can see this in the insistence that those of us opposed to the PGR instantly produce alternatives: it’s not just our job! Several people have described the possibilities at length, and offered to discuss them with people interested in their virtues. The problem is that with the PGR sucking all the air out of the room, it’s going to be hard to get the alternatives off the ground. Responsibility for engaging in this broader conversation falls on both sides.
    Again, the dispute is not about different philosophical visions. The PGR is deeply flawed methodologically. But the folks in charge don’t seem to care, or they persist in rationalizing the problems away. They are proceeding full speed ahead. Several of us have asked for a pause, a suspension to discuss the problems. For Leiter, Brogaard, the Board, and apparently some evaluators, no go.
    Last point: Leiter is not leaving. Have you heard a word from Brit Brogaard? Further, I haven’t seen any evidence that Brit has different views from Leiter’s about the PGR. Several of us tried to get her to tell us how she might do things differently, but we’ve heard nothing from her. As long as he is the co-editor, it’s his game, and I think the same is true if and when he ‘retires’ to the Advisory Board.


  20. JDRox

    Bharath: How many graduate programs in philosophy do you think we need? Yes, the PGR makes things harder for programs that are not ranked…but that would be true even if the PGR were infallibly tracking the Platonic ranking of philosophy programs (supposing there were one). I mean: if, fairly often, the best students go to the best schools, and if, fairly often, the best schools provide the best training, then why wouldn’t we want these initially better and better-trained students to get most of the jobs–even jobs at unranked programs? Or do you deny that the philosophers produced by, say, Princeton are, fairly often, significantly better than the philosophers produced by [pick an unranked school with a Ph.D. program]? Look, I think there are some very fine unranked programs. But being very fine isn’t necessarily good enough to justify having a grad program, if there are enough even better programs to fill all the jobs. Just so, being very good at philosophy isn’t necessarily good enough to get a job, if there are enough better people to take all the jobs.
    Bill: My argument makes some perhaps false simplifying assumptions, but I don’t think there is any reason to think those assumptions are needed for anything more than simplicity. Do you think there is such a reason?


  21. Bharath Vallabha

    JDRox, I am not sure how many philosophy graduate programs there should be. But I think this much is sure: speaking purely logistically, the ranked programs need the unranked programs to place their graduate students, and not just at the “fine” unranked programs, but at many of the other unranked ones as well. As Mitchell and others have been saying, the PGR is deeply unfair to the unranked programs. But it is a big problem for the ranked programs as well. The way the profession is set up right now, it is a hierarchy: many of the graduates of ranked programs get jobs at what are explicitly marked as unranked programs, and many of the graduates of unranked programs get jobs at what are implicitly seen as lesser programs, etc. The PGR reaffirms this hierarchy, with the implication that the profession needs to focus its resources on its best programs. But this is a temporary measure at best. As the lower unranked programs are affected, through budget cuts and/or online teaching, the whole system is going to be affected anyway.
    I think a few years from now people are going to look back and say, “we wasted so much time with PGR and protecting just one ecosystem in the profession, instead of thinking about the profession as a whole.” Mitchell makes a great point when he says: “the silence from the Advisory Board members is deafening.” This silence is just a way of looking away from the facts and hoping somehow magically it will all work out.


  22. JDRox

    Mitchell writes, “Your argument, with its obliviousness to this phenomenon, has made my case for me. It is an instance of the ignorance of non-PGR programs that the PGR fosters, leaving people to make the kinds of assessments that you have made based on bad information and misperceptions.”
    I’m not sure why you think I am oblivious to this phenomenon. I certainly don’t think that GRE scores are tightly tied to one’s aptitude as a philosopher (or one’s potential for future work as a philosopher). And I’m certainly not unaware that there is a whole different SPEP ecosystem, which includes many bright and promising students. How does that affect my argument? Do you mean to suggest that you think the students at Rutgers and, say, UC Davis are of the same “quality,” whatever that amounts to? There are some very nice students at UC Davis, and maybe the best of them would be among the best at Rutgers, etc. etc. But the claim that they are, as a whole, equally good just isn’t credible. (Rinse and repeat with Princeton and Baylor, NYU and Nebraska-Lincoln, etc.) As an aside–I only mention this because you brought it up–I highly doubt that the GRE scores for Davis students are as high as those at Rutgers.
    Bharath: But why is the PGR unfair to unranked programs? If you’re claiming that the PGR has wrongly evaluated them, then I’m all ears: tell me about a bunch of cases where departments have been ranked significantly incorrectly. If there are a bunch of such cases, then I will agree that the PGR should be abolished or radically changed. But if you’re just saying that the PGR makes it hard for less good philosophy programs to get graduate students, which they would be able to get if it wasn’t publicly known that they were less good, then I just disagree that that is unfair. And if you think that there is no such thing as “overall quality” when it comes to philosophy programs, so that ranking programs is like ranking blades of grass, then that is the reason you should object to the PGR, not that it is being unfair to the blades of grass ranked lowly.
    I’m also not sure why you think it is a problem that the profession is “hierarchical” in your sense: that graduates of PGR-ranked programs get jobs at non-PGR-ranked programs, etc. What, exactly, is supposed to be the problem with that? Should students at less good universities have to take classes from professors who had less good training when there are plenty of professors with better training who would be happy to teach at such universities? Do you mean to suggest that there is something objectionable about Princeton-trained philosophers getting jobs at teaching schools? If you do, then I need you to point out what, exactly, that objectionable thing is.


  23. Mitchell Aboulafia

    JD. You raise a question to Bharath and then answer it. “Or do you deny that the philosophers produced by, say, Princeton are, fairly often, significantly better than the philosophers produced by [pick an unranked school with a Ph.D. program]? Look, I think there are some very fine unranked programs. But being very fine isn’t necessarily good enough to justify having a grad program, if there are enough even better programs to fill all the jobs.”
    I do believe that this just about sums up a kind of arrogance and ignorance that PGR supporters often show toward those outside of the PGR ecology. Your wiggle words, “fairly often,” don’t mitigate your basic assumption: Princeton produces significantly better philosophers than, name your unranked school. The response is simple: better in what sense? What standard are you using? I can guarantee you that there are schools out there that are better for many areas of study than Princeton and they aren’t top PGR ones; they aren’t ranked, or aren’t ranked well. But perhaps we should get rid of them “if there are enough even better programs to fill all the jobs.”
    Your comments are a great argument against what the PGR does, that is, create a myopic vision of philosophy as it seeks to protect what it takes for granted to be a better way of doing things. Maybe we should rename the PGR the Philosophical Monist Report.


  24. JDRox

    Mitchell: So sorry, but I’m going to need you to explain how it is arrogant to think that Princeton graduate students are fairly often better than graduate students at unranked programs like Baylor or Nebraska Lincoln or Buffalo. I’m certainly not a Princeton graduate student! Or do you just mean that it is arrogant of me to make such a judgement? But I have met some graduate students from Princeton and some from unranked programs. And while I have met some very fine philosophers in training in both camps, the graduate students at Princeton are, fairly often, better.
    You then go on to note “that there are schools out there that are better for many areas of study than Princeton and they aren’t top PGR ones”. Yes, and I completely agree. Not only that, there are various unranked programs that are much better places to study certain areas than Princeton: medieval philosophy, for example. But that can hardly be a critique of the PGR, since the PGR says as much.
    As far as my claim that we have too many Ph.D. programs goes–please tell me what your critique is (although this is a bit of a side issue). I think it is wrong to take advantage of naive and optimistic graduate students by admitting them into programs that have little or no chance of getting them jobs. This happens–I have seen it with my own two eyes. But of course I didn’t say that “we should get rid of” unranked programs. That’s just silly. Maybe if you had been trained at Princeton you’d be better at hermeneutics… 😉 In any case, what I said was that being very fine at training graduate students isn’t a sufficient justification for having a graduate program if the programs that are even better at training graduate students are training enough students to fill all the jobs. You have made it clear that you reject the antecedent, but surely the conditional is above reproach.
    Finally, why shouldn’t people protect what they take to be a better way of doing things? You, yourself, seem to be spending quite a bit of time promoting what you take to be a better way of doing things–indeed, you are spending quite a bit more time than I on this issue. I’m honestly puzzled about what your critique here is supposed to be.


  25. Mitchell Aboulafia

    JD, I certainly think that we should protect things that we think are better. If you think the PGR is better, you should by all means protect it, which I believe is what you are doing here. As a matter of fact, I’ve been trying to get the Advisory Board and evaluators who feel as you do to argue in favor of the PGR. This was the point of my very first post on the topic, “Rank and Yank,” on UP@NIGHT.
    If you are honestly puzzled by what my critique could be after all that I have written and after reading all of the comments here, then there is not much more I can say. I have been very clear about the various aspects of my critique.
    My previous response to you still holds. This is clear from your double-down. You say this time, “In any case, what I said was that being very fine at training graduate students isn’t a sufficient justification for having a graduate program if the programs that are even better at training graduate students are training enough students to fill all the jobs.” To make this statement, given your other remarks, you must assume that Princeton, and places like it, are at the end of the day doing a better job at training graduate students—in some overall or cosmic sense—and therefore their students should get the available jobs. But this is exactly what I and others dispute. If other places are actually better at training students in certain areas, which you admit, then I don’t see your point. Princeton (and places like it) will be better in certain ways, but not in all ways. If so, their students should get some of the jobs, but not all of them. If your concern is that there are too many programs, so students are being ripped off, then the answer is not the PGR. It is data. For example, as I mentioned last time, there are a lot of false assumptions about the quality of graduate students at different places, and I supplied information about GRE scores. Besides your anecdotal meetings with students, I suggest we get some hard data, especially about job placement, before we make assumptions about what programs are sustainable. We should also gather information on the kinds of places students want to work. Not all of them crave Research I institutions. An insidious aspect of the PGR is that it undermines diversity in the profession. But in addition it leads students to believe that their goals should be similar: to land a job at one of these “top places.”
    Oh, and I don’t think I would have done better on the hermeneutics front at Princeton. For example, Gadamer was on the faculty of Boston College when I was there.


  26. JDRox

    Mitchell! The claim about hermeneutics and Princeton was supposed to be a joke, of course. Princeton is not where I would recommend someone study hermeneutics. (That’s the joke.) Just trying to keep things light, sorry about any misunderstanding.
    I understand your critique of the PGR, but I’m asking if you have an argument against the claim that there are too many Ph.D. programs. That is consistent with the PGR being garbage and there being loads of non-ranked programs that are awesome.
    Finally, I’m struggling to see how or where, exactly, I’m failing to communicate my point. My position, as you know, is that the PGR is doing a decent job of tracking quality, and that there is nothing unfair about better trained graduate students getting jobs–of any kind–over less well trained graduate students. If the PGR is accurate, the best trained medieval philosophers are not trained at Princeton. (Happily, Princeton is not training many medieval philosophers.) If the sort of reasoning I outlined above is correct, the best prospective medieval philosophers will, fairly often, go to the top PGR ranked schools for medieval philosophy. Many of the jobs in medieval will be taken by these most promising prospective students who have gone on to train at the best programs in medieval philosophy, or by most promising students who have trained at good, but not the best programs, or somewhat promising students who have trained at the best programs, etc. I think that is all for the good, even though that means that there won’t be many jobs for not especially promising students who have trained at not especially good programs.
    By the way, I’m not suggesting that all students should aim to get R1 jobs. I don’t think the PGR encourages that, and it is obviously impossible for even just the students from the top 20 Leiter schools to all get jobs at top 50 Ph.D. programs.
    Why do you think the PGR undermines diversity in the profession, and what kind of diversity are you talking about? A pretty broad range of approaches are used at PGR ranked schools, from X-phi to conceptual analysis to the history of philosophy to feminist philosophy to deconstruction to a priori metaphysics to extreme empiricism to phenomenology to empirically informed but not empiricist work in the philosophy of mind and language to…etc.


  27. Mitchell Aboulafia

    JD, Hi. It wasn’t clear from your other comments that you were joking. Sorry if I misunderstood. I do believe we have a basic disagreement. I cannot go along with your claim that the PGR does a decent job of tracking quality. How would you even know this? What standard or criteria are you checking the PGR against? It is a reputational survey, with a pool of evaluators that shares too many assumptions about the way philosophy should be done, making them somewhat blind to different traditions. Almost half of the evaluators went to a handful of the same graduate programs. Further, the methodology is so flawed that evaluators can actually be comparing apples to oranges. People with sophisticated backgrounds in surveying have come to the conclusion that the PGR is deeply flawed. (Please see my post for some links.)
    The PGR doesn’t address diversity because it doesn’t pay sufficient attention to people who are specialists in many fields. They are written off because they are not seen as worthy of invitations to be evaluators. Also, in certain areas, there are very few people evaluating. I would be more than happy to have an impartial study done of experts in the various areas you list to see if they think that the PGR does justice to their specializations. (Note my remarks about Chinese Philosophy in my post.) In addition, it has been pointed out that women are not fairly represented in the pool of evaluators.
    The question of whether there are too many graduate programs is an important one. My point was that we don’t know if this is so, or in what way it is so, until the question is studied. We need to see actual data, for example, placement records. And then we would need criteria for deciding what programs are unworthy of continuing. In any case, I certainly wouldn’t want this question tied in any way to the PGR, and from your remarks, I believe that this is true for you also.
    I would be happy to carry this conversation on with you in private, but I am becoming a bit uncomfortable because I feel that we are repeating points that have already been made numerous times in this thread. Please feel free to write to me and we can carry on. Really!

