I have read in several places this description of my placement post and my response to Brian Leiter's criticisms of that post (most recently, in comments posted yesterday at Philosophical Comment): 

"July 1:  I posted a sharp critique of some utterly misleading rankings produced by Carolyn Jennings, a  tenure-stream faculty member at UC Merced.  She quickly started revising it after I called her out."

For the record, this does not strike me as an accurate representation of those events. 

First, while I did post a ranking, I made it clear that I did this as an exercise: (from the original post, bold original) "As discussed here in the comments, one of the advantages of comparative data on placement is that they help fill in gaps left over by the PGR… To illustrate this, I below rank the top 50 departments by tenure-track placement rate**, providing for comparison these departments' ranks from the 2011 "Ranking Of Top 50 Faculties In The English-Speaking World" by the Philosophical Gourmet Report… Please note that this placement ranking is provided only to demonstrate the potential utility of these data."

Second, while Brian Leiter did find the rankings misleading, many others did not, and even commended the clarity of language in my post. Take these quotes from David Marshall Miller, who has also worked on placement data: "Andrew Carson and, especially, Carolyn Dicey Jennings have developed analyses that now strike me as very robust." and "I will say, to again quote Leiter, that “all such exercises are of very limited value.” Nevertheless, they are of some use, and should be made available, so long as the methodology and limitations of the analysis are made clear. I think the PGR and the placement rankings by Jennings, Carson, and myself all meet this standard." 

Third, Brian did post criticisms of the ranking, but I did not make any substantial revisions to the ranking based on his criticisms, since I did not find those criticisms to have merit. Brian's way of characterizing my response at the time was "Prof. Jennings digs in her heels."

I did later make one relatively substantial change to the post, which was to move from a placement ranking to placement brackets. This idea emerged from discussion with others, as I note in the post itself, and was in no way inspired by Brian's criticisms.

The only change I made as a result of Brian's criticisms was to check the NYU placement page to see whether my method had failed to capture all of NYU's tenure-track placements. It had. In fact, I had predicted that the method would miss some placements. As a reminder, the original post defined "placement rate" this way: "The average yearly tenure-track placements reported at ProPhilosophy and PhilAppointments between 2011 and 2014 divided by the average yearly graduates between 2009 and 2013 as reported in the 2013 APA Guide to Graduate Programs or by email" and included the caveat "Since the data set is not yet complete, I do not recommend viewing these as an authoritative ranking" (from the original post, bold original). So long as the method did not particularly disadvantage NYU, this shouldn't have been a problem, as I explain in my response to Brian's criticisms. Nonetheless, I added in those two placements.

As I suspected would happen, once other departments sent in their own missed placements, NYU dropped back down to a similar position in the ranking, and I still haven't checked the placement pages for most departments. (With these methods, NYU's placement rate originally put it at rank #24 and now puts it at rank #20, although, again, this is in no way supposed to be authoritative, as I have stressed from the very first version of this post, but an exercise to demonstrate the utility of comparative placement data. I am still exploring these issues and am not at all sure whether rank is the best way to capture placement data; I used rank only because it enabled me to make a comparison with the PGR, which is also true of an earlier iteration of my placement analysis.)
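
For concreteness, here is a minimal sketch, in Python, of the placement rate calculation as defined above. The yearly counts below are hypothetical, included only to illustrate the arithmetic, and are not drawn from the actual data set.

    # Placement rate as defined above: average yearly tenure-track placements
    # (2011-2014) divided by average yearly graduates (2009-2013).
    # All counts here are hypothetical.
    placements_2011_2014 = [2, 1, 3, 2]    # tenure-track placements per year (hypothetical)
    graduates_2009_2013 = [4, 5, 3, 4, 4]  # graduates per year (hypothetical)

    avg_placements = sum(placements_2011_2014) / len(placements_2011_2014)  # 2.0
    avg_graduates = sum(graduates_2009_2013) / len(graduates_2009_2013)     # 4.0
    placement_rate = avg_placements / avg_graduates                         # 0.50
    print(f"placement rate: {placement_rate:.2f}")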

If Brian's statement fairly represented these facts, it would read more like the following: "July 1: I posted a sharp critique of a post by Carolyn Jennings, a tenure-stream faculty member at UC Merced, that I found utterly misleading."


3 responses to “For the Record”

  1. Guest

    “Nevertheless, they are of some use, and should be made available, so long as the methodology and limitations of the analysis are made clear.”
    I agree. I can think of no reason why we cannot have a plurality of metrics, just as I see no reason why we cannot have a plurality of approaches to analysis in philosophy.
    The first thing I was taught in statistical modelling was ‘specify the question’. I don’t think ‘What is the best department in philosophy for graduate education?’ is specific enough to bring statistical methods to bear on it.
    That is why I do not oppose the PGR and why I think these efforts to make other tools available are a good idea – especially making the methodology and limitations clear. If this all tends to make potential graduate students more comfortable with collecting or interpreting data then so much the better.
    And of course a collection of rankings, however sophisticated and numerous, is like a box of screwdrivers – there remain many different types of tool out there.


  2. anon

    I thought before and I still think that your analyses of the PGR are extremely valuable to the discipline, Carolyn. Thank you.
    Leiter’s responses to them are of course absurd.


  3. William Blattner

    I also agree that this data is worth collecting, and Prof. Jennings’s openness to suggestions, revisions, etc. is an excellent model of how to conduct oneself in public discussions of the profession.
    I sent a version of this comment to Prof. Jennings privately, but I also think it’s worth stating publicly. For the data to become more accurate, it must move from estimating the number of folks earning PhD’s to a real head-count. In smaller programs a difference between, say, three people looking for jobs and five can make a huge difference to the results. Also, there are programs, such as ours here at Georgetown, that do not produce PhD’s who are exclusively aiming at a job teaching in academe. So, the head-count should be of those entering the job market in academic philosophy. I am sensitive to this because not only is our program a little smaller, but also historically a significant proportion of our PhD’s have had other career goals. (E.g., MD-PhD’s who aim to work in a hospital; JD-PhD’s who want to teach in a law school or work in public advocacy or think tanks. You get the picture.)
    Overall, however, thank you, Prof. Jennings, for your selfless work!

