As discussed here in the comments, one of the advantages of comparative data on placement is that they help fill in gaps left by the PGR. That is, the PGR aims to measure the collective reputation of a department’s faculty, but faculty reputation does not necessarily predict the likelihood of placement by that department, perhaps because it does not necessarily predict the overall quality of education in that department or the quality of its preparation of students for the job market. Comparative placement data have the potential to provide insight into these factors. To illustrate this, below I bracket the top 50 departments by tenure-track placement rate** (Note: I removed three universities from the top 50 that reported fewer than 2 graduates per year, since small numbers may yield misleading placement rates), providing for comparison these departments’ ranks from the 2011 “Ranking Of Top 50 Faculties In The English-Speaking World” by the Philosophical Gourmet Report. Please note that the placement brackets are provided only to demonstrate the potential utility of these data. Since the data set is not yet complete, I do not recommend viewing these brackets as authoritative. Update: Please see this post for an idea of how I envision this project developing. I have released the spreadsheet containing the raw data and methods I have been using to compute these results, and welcome any and all corrections. As a reminder, I do not have data on the yearly graduates from many departments, listed below. (Those departments are welcome to send me their data, if available.)

Update 7/1/2014: It has come to my attention that Brian Leiter has aired some criticisms of this post on his blog and has publicly suggested that it (this post, not his blog) be taken down. I respond to these criticisms below. 

Updates 7/2/2014:

  1. I changed some wording above from “ranking” to “brackets” and added a link to the spreadsheet. I have also changed the numbered ranking below to a grouping by bracket (departments are listed alphabetically within brackets). This was a suggestion of Ned Block’s; we have been corresponding about statistical significance, and I decided that his suggestion would help avoid making small differences between placement rates appear more important than they are. I have left in the PGR rank for comparison, although the difference in rank has been omitted for the reasons given above.
  2. I have also added updates to my responses to Brian, based on some new statistical tests. 
  3. I am adding a link to a chart that will help readers to visualize the total number of reported tenure-track placements and estimated graduates from each department, rather than just percentage of tenure-track placements. 

Update 7/6/2014: I ran a completeness test for 5 departments selected at random using a random number generator. The tenure-track numbers for these 5 departments appear to be accurate. More below.

Departments for which I have data on yearly graduates, with those not in the top 50 of the worldwide PGR bolded and those with incomplete graduation data starred (*), listed by tenure-track placement rate bracket:

Tenure-Track Placement Rate | Department | 2011 PGR Rank (Worldwide)

≥90%
Princeton University 4
University of California, Berkeley 12
University of California, Los Angeles 15
University of Pittsburgh (HPS) 6

≥80%
Columbia University 12
Harvard University 6
Rutgers University 3
University of Tennessee
Yale University* 8

≥70%
Massachusetts Institute of Technology 8
Stanford University 10
University of North Carolina, Chapel Hill 10
University of Pennsylvania 34

≥60%
Baylor University
Fordham University
New York University 1
Northwestern University 38
Saint Louis University
University of California, Irvine (LPS) 34
University of Chicago* 24
University of Michigan, Ann Arbor 5
University of Pittsburgh 6
University of Wisconsin, Madison 27
Vanderbilt University

≥50%
Australian National University 15
Boston University
University of California, San Diego 27
University of Connecticut, Storrs
University of Notre Dame 21
University of Oregon
University of Texas, Austin 24

≥40%
Cornell University 15
Duke University 29
Indiana University, Bloomington 29
Johns Hopkins University 49
Syracuse University 49
University of Southern California 12
University of Toronto 15
University of Virginia 49
University of Washington

≥30%
Brown University 22
Carnegie Mellon University
Georgetown University 45
Rice University
University of California, Davis
University of California, Riverside 38
University of Colorado, Boulder 29
University of Iowa
University of Maryland 38
University of Sheffield 45

Here is the link to the chart mentioned above (last updated 7/2014). The blue bars represent the number of candidates with reported tenure-track placements between 2011 and 2013, whereas the red bars represent the estimated total number of graduates in those years, listed in order of tenure-track placement rate. 

Departments for which I do not have data on yearly graduates, with those in the top 50 of the PGR bolded, listed by 1) tenure-track placements, 2) total placed candidates, 3) alphabetical order: University of Oxford; University of St Andrews/Stirling; University of Cambridge; University of Western Ontario; University of Chicago (CHSS); University of Kentucky; University of Edinburgh; University of Sydney; University of Warwick; York University; King’s College, London; London School of Economics; Marquette University; State University of New York, Buffalo; University of Adelaide; University of Auckland; University of Leeds; University of Melbourne; University of New Mexico; Victoria University of Wellington; Birkbeck, University of London; Durham University; McGill University; Trinity College, Dublin; University College London; University of Tasmania; Wayne State University.

**Computed as the average yearly tenure-track placements reported at ProPhilosophy and PhilAppointments between 2011 and 2014, divided by the average yearly graduates between 2009 and 2013 as reported in the 2013 APA Guide to Graduate Programs or by email; see this post for details.
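Since the rate is a ratio of two averages, it can be sketched in a few lines of Python. The department and all numbers below are invented placeholders, not values from the data set:

```python
# Placement rate: average yearly tenure-track placements divided by
# average yearly graduates. The two averages may cover different spans
# (e.g., 3 years of placements vs. 5 years of graduates), as in the post.
def placement_rate(tt_placements_by_year, graduates_by_year):
    """Return average yearly TT placements / average yearly graduates."""
    avg_tt = sum(tt_placements_by_year) / len(tt_placements_by_year)
    avg_grads = sum(graduates_by_year) / len(graduates_by_year)
    return avg_tt / avg_grads

# Hypothetical department: 3 years of placement data, 5 years of graduates.
rate = placement_rate([4, 6, 5], [6, 7, 5, 8, 6])
print(f"{rate:.0%}")  # average 5 placements / average 6.4 graduates ≈ 78%
```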


Responses to the criticisms contained in Brian Leiter’s post, found here:

There are many things I would object to in Brian’s post, especially phrasing that calls into question my intelligence (“I would think philosophers are smart enough…”), but I want to focus on those points I take to be the most constructive:

1) I agree that the information is incomplete, but I think it is reasonable to air it in this fashion for several reasons: a) the departments listed, at least in the United States, are likely to be in the same position with respect to missing data; b) the departments listed have reason to make their tenure-track placements public, and I have used the most complete databases containing this information for the years in question, updating that information when called upon to do so; c) this exercise brings to light several departments that have been overlooked by the PGR’s ranking system, which I take to be valuable information for potential graduate students; and d) my bolded caveat makes clear that this is an exercise aimed at demonstrating the value of this sort of information, rather than a claim to an authoritative ranking. I have put out several calls for more data, including two posts at the beginning of June (here is the complete list of my posts). In 2011-2012 and 2012-2013 ProPhilosophy emailed departments directly to obtain complete placement information. If anyone knows of placements that have not been reported in the listed venues, please send that information to me.

Update (7/2/14): I have been getting new placement data by email and have started by updating the rankings. I will format the additions as the next step. Thus, if you notice some unformatted text at the bottom of the “Updated Data” tab, that part is a work in progress. 

Update (7/6/2014): I ran a completeness test for 5 departments. I first generated 5 random numbers between 2 and 92 at http://www.random.org, corresponding to row numbers in the far-right table of the “Department Trends” tab. I then used the placement pages for the corresponding departments to check my numbers.
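For what it is worth, this selection step can be reproduced with Python’s standard library in place of random.org; the seed below is arbitrary, and only the sampling logic mirrors the procedure described above:

```python
import random

# Draw 5 distinct row numbers between 2 and 92 (inclusive), as in the
# completeness test. A fixed seed makes the draw reproducible; random.org
# was used in the original, but any uniform sampler serves the same purpose.
rng = random.Random(0)
rows = rng.sample(range(2, 93), 5)
print(sorted(rows))
```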

a) “56,” Washington University, St Louis. Missing from Excel spreadsheet: 3 postdocs (2012, 2013, 2014) and 1 renewable instructor (2013). TT placements from Excel: 4. Graduation years for tenure-track placements: 2010, 2011, 2011, 2012. Result: no change in the number of TT positions. 

b) “34,” Syracuse University. Missing from Excel spreadsheet: 2 postdocs (2012, 2013), 1 VAP (2012), and 1 lecturer (2012). (Note: the Air Force Academy position appears to be 100% adjunct.) TT placements from Excel: 5. Graduation years for tenure-track placements: 2005, 2007, 2008, 2013, 2013. Result: no change in the number of TT positions.

c) “11,” Massachusetts Institute of Technology. Missing from Excel spreadsheet: 1 postdoc (2013). TT placements from Excel: 13. Graduation years for tenure-track placements: 2009, 2010, 2010, 2011, 2011, 2011, 2011, 2012, 2012, 2012,  2012, 2012, 2014. Result: no change in the number of TT positions. 

d) “77,” University of Rochester. Missing from Excel spreadsheet: None/Unclear. (Note: Hobart and William Smith and Malone College placements appear to have occurred in 2005, according to university websites.) TT placements from Excel: 1. Graduation year for tenure-track placement: 2012. Result: no change in the number of TT positions. 

e) “82,” State University of New York, Binghamton. Missing from Excel spreadsheet: 1 TT placement (2012). TT placements from Excel: 1. Graduation years for tenure-track placements: 2012, 2012. Note: Excel TT placement from PIC program, so TT placement number stays the same. Result: no change in the number of TT positions. 

Thus, it seems reasonable to suppose that the data I have are fairly complete with respect to tenure-track placements, but fairly incomplete with respect to temporary positions, such as postdoctoral, VAP, and instructor positions. (I will add the missing data yielded by this exercise soon.)

2) Choosing the relevant PGR ranking: I chose the most recent PGR ranking because I assume that faculty reputation has more of an impact on how one is perceived on the job market than it does as a measure of the quality of one’s education. One could look at the mean PGR ranking over the course of each graduate’s education, but I suspect that the impact of this value on placement rate is far outweighed by the PGR ranking/faculty reputation at the time of that graduate’s placement. I have heard anecdotal stories to this effect, in any case. I am happy to hear more argument on this point and could possibly run an analysis to test it.

Update (7/2/14): On the suggestion of Shen-yi Liao, I looked at the correlation between the tenure-track placement rates for these departments and both the 2006 and 2011 PGR mean ratings. The correlation coefficients for both years are 0.7. I thus see no reason to use the 2006 numbers here, since the 2011 numbers appear to be equally well correlated with placement.

3) I treat tenure-track placements equally here because I think this is the most useful information to potential graduate students. I presume that most potential graduate students want a tenure-track equivalent job but do not yet know what type of job. I know for a fact that some graduates seek positions with low research expectations and high teaching loads, whereas others seek positions with high research expectations and low teaching loads. I also know that location is more important to many graduates than the type of institution and/or department. Finally, I know that climate matters a lot to many graduates seeking placement. Perhaps in time we can look at more fine-grained measures that will be useful to those potential graduate students who have a better idea of what they want. 

4) I am having a difficult time reading much of this point charitably in its literal form. Obviously, it would be very surprising if the numbers that I have collected for these departments were “equivalent, in most cases” to randomly chosen numbers. I compare the average yearly graduates to the average yearly tenure-track placements to arrive at a placement rate. For the most part, more years of data will yield a more accurate placement rate, so long as one is as consistent as possible across departments. In my case, I was able to obtain 5 years of graduation data for most departments through the APA Graduate Guide for 2009 to 2013, and 3 years of placement data for many departments through the collection efforts of ProPhilosophy (which emailed departments directly) and PhilAppointments from 2011 to 2014. Since I compared average to average and the years mostly overlap (that is, the graduates earning placement are mostly graduates between 2009 and 2013), I don’t find this issue very problematic. I am open to more discussion on this, of course.

Update (7/2/14): On the suggestion of Ned Block, I looked at the statistical significance of the difference in placement rate between 3 departments. Brian may have been speaking to a supposed absence of statistical significance when he claimed to see my numbers as equivalent to random numbers. Since Ned is at NYU and since NYU is put forward as a counterexample by Brian, I looked at the significance of the difference between NYU and both UC Berkeley and UC Riverside (the only two departments with the same average yearly graduates as NYU). Using the chi-squared test, I found the difference among all three of these to be highly significant (p=.003), and so this difference is not likely due to randomness. Further, I looked at only UC Berkeley and NYU and found that difference to be statistically significant (p=.03). I hope that this puts the worry about randomness to rest.
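For readers who want to check this kind of claim themselves, here is a sketch of a chi-squared test of independence on a 2x2 table of placed vs. not-placed graduates. The counts are invented for illustration; the actual counts are in the released spreadsheet:

```python
# Chi-squared statistic for a 2x2 contingency table [[a, b], [c, d]],
# e.g., rows = departments, columns = (placed, not placed). For a 2x2
# table the statistic has the closed form below (df = 1).
def chi2_2x2(a, b, c, d):
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical: dept X placed 18 of 20 graduates, dept Y placed 9 of 20.
stat = chi2_2x2(18, 2, 9, 11)
print(f"chi2 = {stat:.2f}; significant at .05 if > 3.84 (df = 1)")
```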

Update 7/6/14: From the exercise listed under 1), the average graduation year for candidates placed in tenure-track jobs between 2011 and 2014 from these 5 departments is 2011, and only 4 of the 25 placed candidates (16%) obtained their PhD outside the timeframe of the graduation data that I used (2009-2013). This is further reason to suppose that the graduation data I used capture most of the placed candidates (84% of this set).

I am happy to further respond to constructive criticisms on this report. I would ask that these criticisms focus on the work, rather than on my person. 


38 responses to “Job Placement 2011-2014: Comparing Placement Rank to PGR Rank (Updated 9/17/2014)”

  1. Jonathan Weinberg

    I can understand if you’re trying not to ruffle feathers, but it might be worthwhile extending that table for all Leiter-rated programs. I’d be curious to see where my own home department of Arizona would land, in particular.

  2. David Silverman

    Thank you for putting together this data. Just to point out that the St Andrews graduate programme is run jointly with the University of Stirling, so it should read St Andrews/Stirling.

  3. Carolyn Dicey Jennings

    Here is the Arizona data that I have: 4 TT placements, 8 postdoctoral/VAP/instructor placements, 1 duplicate candidate, and 5.8 yearly graduates for an overall placement rate of 63% and a tenure-track placement rate of 23%. Feel free to send me any placement data you think may be missing. With this data, it would be ranked 53.

  4. Carolyn Dicey Jennings

    Good point. I changed it.

  5. Jonathan Weinberg

    Thanks Carolyn! I’m not sure if I’m tallying the same way you are, but I’m seeing something more like 8 or so TT placements from 2011-present? Maybe some of these just did not get reported on those sites?
    http://philosophy.arizona.edu/node/560

  6. Carolyn Dicey Jennings

    Some of the 2011 graduates were placed in the 2010-2011 academic year, rather than the 2011-2012 academic year (Nathan Ballantyne, Robert Wagoner, and Helen Daly). Kevin Vallier and Jacob Caton seem to have been missed and I will add them. This puts Arizona in rank 46. I will update accordingly. (I updated this comment and the above to reflect new information.)

  7. anon

    I want to note that placement records—and thus data like these that are gleaned from placement records—can be misleading insofar as they show TT placement per graduate, rather than per matriculating student. A school might have a stellar placement record because 60% of its students leave before completing a PhD, and the other 40% do very well. That shouldn’t be comforting to students, especially to prospective students.
    So it’s important to look at annual matriculation rates to the program vs. annual TT placement rates, in addition to these data, which show TT placement ranks among those who actually get the PhD.
    Just something to note. Thanks for the data collection and analysis!

  8. Carolyn Dicey Jennings

    Good point. It would also be good to know how many hope to be placed in an academic position. Some of the graduates probably leave academia prior to seeking academic employment, and I am sure the same is true of those who decide not to complete the program.

  9. Anon II

    It looks very much like a few of the departments on this list ought to be ashamed of their placement performance. I am starting a PhD at one of the terribly under-performing schools on this list, and I would greatly welcome discussion about the sorts of positive steps that can be taken to improve placement records. What are the characteristics of a school that over-performs in this respect (bearing in mind that we are controlling for faculty prestige)?

  10. Derek Bowman

    It’s the profession as a whole that ought to be ashamed.
    It’s a great idea for individual departments to use these numbers as an occasion to ask how well they’re preparing their graduate students. But the problem these numbers put in stark relief is not a training problem – it’s a problem of too few tenure-track jobs.
    Is there any way for underperforming departments to learn the lessons of what ‘well-performing’ (= 50%+ placement – a pretty low threshold) departments are doing without simply increasing the competitiveness of the job market? And is there any reason to think that increased competition on ‘job market preparedness,’ without an increase in the number of professional-wage jobs, will be better for graduate education?

  11. Anonymous Recent Grad

    I know that at least one of the “overperforming” departments gives graduate students the opportunity to teach a wide variety of courses independently, both at the introductory and upper level. Thus, as newly minted PhDs, their graduates have teaching records that rival many folks from higher PGR-ranked departments who have been rotating through VAPs for many years. This certainly can’t hurt on the market. I’d be interested to hear if this is the case for other departments with a large (positive) difference between PGR ranking and placement. Especially since increasing teaching mentoring, opportunities for independent teaching, and course development might be something concrete that “underperforming” departments can do proactively to make their graduates more marketable fresh out. On the flip side, I’ve been shocked on numerous occasions to hear how little independent teaching friends from more highly ranked programs have been able to do (and even to hear that they’ve been strongly discouraged from doing more).

  12. Anonymous commenter

    Doesn’t the PGR effectively rank Pitt and Pitt HPS as one program, rather than leaving HPS off entirely?

  13. anon grad

    Thanks for putting this together.

  14. Derek Bowman

    That’s an interesting hypothesis and worth following up on. But I would caution grad students (and those advising them) not to jump too quickly to the supposition that more or better teaching experience is the key to job market success. Based on my experience with one of the ‘underperforming’ departments, it was publications, not teaching experience, that marked the difference between ‘few interviews’ and ‘multiple TT job offers.’ On the other hand, I can report that a strong teaching record is considered very impressive when applying for part-time adjunct positions.
    Also, keep in mind two further points:
    1. These numbers include many people who already had VAPs, post-docs, or instructor positions, so it’s not clear that, even for ‘successful’ departments, TT placement is bypassing the VAP stage. (For example, based on a brief search of the job postings, it looks like more than half of the top-performing department’s TT jobs were secured by those with at least one prior position.)
    2. Departments that can offer their grad students a variety of teaching experiences are departments that have lots of teaching needs, and so it’s often easier for them to offer continued instructor-of-record TA positions to students who are not ready for, or unsuccessful on, the job market. In that case it might be security (in income and professional status), rather than teaching experience, that explains the difference.
    But whatever the explanation, thanks again to Carolyn Dicey Jennings for compiling the data that’s allowing us to begin this conversation.
    For current grad students, especially those at ‘underperforming’ departments, the best place to start is by talking to both the successful and unsuccessful candidates who have graduated from your department. What do they think made the difference? And, for the unsuccessful candidates, what stage of the process did they strike out at (with possibilities ranging from ‘lots of flyouts, no job offers’ to ‘didn’t bother applying’)?

  15. Carolyn Dicey Jennings

    Yes, good point. I did not realize this, but will change the above.

  16. Anon II

    Derek, that’s a good point. However, we can all agree that the job market is bad, and we knew that before this comparative data arrived. But there is a distinct issue that this data highlights: even a prospective grad student who goes into a PhD with a full understanding of how bad the market is might still be hugely misled about their chances for placement.
    Anonymous Recent Grad, I’ve heard that many of those top schools don’t want grads teaching undergrads, as they believe that this will hurt their prestige. This is an interesting fact that prospective grads might want to start asking about when they are considering offers.

  17. Carolyn Dicey Jennings

    I updated the NYU numbers based on their placement page, due to a suggestion that I might have missed some placements. I did find two new placements that had not yet been reported, including one in-house hire. Including those, they have a 64% TT placement rate, which puts them in between Wisconsin’s 65% and SLU’s 63%. If I did not include the in-house hire, their placement would fall just below Vanderbilt and just above ANU at 58% (with UPenn). I welcome comments on this point. There have been very few in-house placements and I have decided not to track them this year. But if there is a good argument for including/excluding these placements from the placement rate, I am interested in hearing it. I am happy to update other departments for which I get updated numbers. As I said, I will release the Excel spreadsheet very soon.

  18. Anonymous

    I just graduated from one of the ‘overperforming’ schools and was fortunate enough to land a TT position at a SLAC for next year. Most (though not all) of our graduates end up at small 4 year colleges, and I think there are two things about the program that make our grads attractive to these kinds of schools:
    1. Contra Derek, I think teaching experience does matter a lot (a lot more than research) to schools like this. However, I don’t doubt that Derek’s comments accurately reflect the attitudes of more prestigious schools, and even at less prestigious ones it’s still important to have published something respectable.
    2. Our program, like many other ‘overperforming’ ones on this list, places a big emphasis on pluralism. (I mean genuine pluralism — not 15 Lacanians and a token analytic epistemologist). Students are required to take several courses in the history of philosophy and in both the continental and analytic traditions, and are required to teach a number of historical figures. So our students generally come out prepared to teach a wide variety of courses. This is very attractive to hiring committees at small departments, where they often only have 2-3 faculty to fill the whole course catalog. If the school needs someone to teach epistemology, medieval, and existentialism next year, and you’re the only applicant qualified to do all those, you’ve got a great shot at the job.

  19. anon

    First off, thanks again to Dr. Dicey-Jennings for the great work collecting this data and getting the ball rolling. It is helpful now, and it will only get better as more schools send her their information.
    Second, I would encourage CDJ not to be goaded by the condescension and lack of respect from some on the web. It is, to say the least, uncharitable to suggest that CDJ fails to understand the distinction between forward-looking and backward-looking measures. What her post helps to undercut is the idea (widespread among graduate students, believe me) that when schools are looking to hire they cast an eye to who is currently more highly ranked on the PGR and select their interview lists accordingly. CDJ’s data shows that for many schools this is simply not true.

  20. Derek Bowman

    I suspect we may just be talking past one another.
    You say that “teaching experience does matter a lot (a lot more than research) to schools like this,” but then you go on to concede that “even at less prestigious ones it’s still important to have published something respectable.”
    So, when choosing among those candidates who have published something respectable, teaching matters more than research for small 4 year colleges. But candidates who haven’t published something respectable are unlikely to make it to the stage at which teaching matters.
    It sounds like that hypothesis is consistent with both your experience and mine.

  21. philosopher

    I think people should be cautious in reading too much into the data (and chart). Earlier, NYU was ranked 26. After just TWO more hires were added, NYU jumped to 14. So the ranking is not a very subtle instrument. Any number of schools may have two or more unaccounted for hires that would change their ranking substantially.

  22. Anonymous

    I agree with that. Candidates–especially ones from non-PGR schools–typically need something published to even make the initial cut, for the obvious reason that the hiring school wants to be confident you will be able to get tenure. After that, fit and teaching are often the most important factors for smaller schools.

  23. Pavlos

    This seems to me potentially misleading. Does it track how many of the graduating students actually went on the philosophy-in-academia job market? In my own year, several people who finished did not go on the market at all, but went on to jobs outside of academia or to law school. Given that a graduating class is often, say, 6 people, one or two such persons make a huge difference. Does it track the ‘quality’ of placement and specialty fields? Some schools higher up do indeed place people, but at a lot of non-research places (often these are higher-ranking PGR departments that are overall very good but lack any top-tier strengths), whereas others lower on the list seem to place people in research positions in the specific areas in which they excel. These are all important issues, and I am afraid that this kind of chart, once out there, can seriously mislead students. But I might be wrong, of course.

  24. Carolyn Dicey Jennings

    This is a fair point, although 2 tenure-track hires is no small thing in this market. Moreover, it may be that if I added the one or two missing TT placements for all of the above departments then NYU would drop back down.

  25. Carolyn Dicey Jennings

    No, the above information doesn’t capture any of these variables. Perhaps you could suggest phrasing that would make it clearer that the information does not capture these variables? In any case, I think that I agree with you that the best case scenario would be one in which one knew all of the graduates, all of those seeking academic employment, and all of those placed for each graduating class. I might be able to do something like this next year, depending on how much support I have.

  26. Shell

    “her measure of placement success takes no account of the kinds of jobs graduates secure. 2/2 is the same as 4/4, research university is the same as a liberal arts college, a PhD-granting department is the same as a community college”
    Leiter – not a member of a philosophy department, and actively barred by Chicago’s philosophy department from being cross-appointed – is out of touch with how things are among philosophy job seekers. People are just trying to get on the ladder somewhere, and will grudgingly take a 4/4 with the hopes of trading up. A TT job is a huge success in this market, and multi-year non TT appointments are also highly prized. A survey of current grad students will reveal this very plain truth, but anyone in one of these departments already knows this.

  27. Chicago student

    FYI, Leiter is on multiple dissertation committees of philosophy PhD students at Chicago, and directs some of them. He is not “actively barred by Chicago’s philosophy department” (whatever that means) from being appointed. He never asked to be appointed. Last year, the graduate students asked the Chair and Director of Graduate Studies about appointing him, but they said he would have to request consideration, and he declined (my understanding is that because there are no courtesy appointments, he does not want to have the obligation of attending faculty meetings and other admin work). There is some kind of “bad blood” between him and Robert Pippin going back to some reviews he wrote of Pippin’s work about 15 years ago, but I don’t know what role that plays in any of this. In short, this irrelevant personal attack isn’t even factually correct.

  28. Carolyn Dicey Jennings

    I had considered editing that portion of the above comment, but you beat me to it by responding to it. Since you did not provide a working email address, I will leave the exchange as is rather than delete both comments. Do let me know if you think it more appropriate to remove both (that is, that portion of the above comment and your full comment).

  29. BLS Nelson

    Thank you, Carolyn (if I may), for all this fine work. For what it is worth, as a graduate student who will eventually be seeking tenure-track employment, I find that the prior assumptions that you make in making sense of the data resonate far more with me than the assumptions made by PGR. Given that the job market is post-apocalyptic, it seems unreasonable to insist very strongly upon a ranking that makes fine discriminations between relative teaching loads.
    But that’s my subjective interpretation of the situation, based on my views of what counts as a useful metric. It should go without saying that all placement-seekers have different values, and hence different definitions of success. It is up to us individually to decide what counts as a useful set of criteria for establishing a ranking system.
    e.g., I take it that I am relatively unusual in the sense that I would prefer to work with colleagues who (a) are recognized as productive researchers and (b) have an intellectual will of their own. The former means I am probably more interested in impact factors than some other folks in the profession, and the latter means I am probably more interested in the impact of both articles in prestige journals and the impact of monographs (on the assumption that book-length works give authors the opportunity to follow their own train of reasoning wherever it leads).
    To the best of my knowledge, no ranking system exists to accommodate the things I care about in the parts of the profession I respect. In the past I have had to devise my own system based on the criteria I care about. But that’s already put me in a bad way, given that my relatively sparse training in sociological methods means it is all too possible that I have bungled the job.
    Which is all just to say that I am highly invested in, and interested in, alternative methods of assessment devised by those who have the right training, as (if I’m not mistaken) you do. Please continue, as I think we are better for it.

  30. Shen-yi Liao

    I wonder if it would be more useful to think about this in terms of cardinal rankings rather than ordinal rankings. For the placement data, it seems that the rate information is more informative than the rank. Similarly for the PGR data, it seems that the general reputation metric is more informative than the rank.
    Then it seems that Leiter’s hypothesis can be tested empirically. Which correlates better with current placement rates: the most recent PGR reputation metric, or the PGR reputation metric from, say, six years ago? The two correlation coefficients can then be compared using, e.g., http://vassarstats.net/rdiff.html.
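    [For readers curious about the mechanics: the comparison suggested here is the standard Fisher r-to-z test for the difference between two independent Pearson correlations, which is what the vassarstats page implements. A minimal sketch in Python; the sample sizes passed in the example are hypothetical placeholders, not the actual counts from the data set.]

    ```python
    import math

    def compare_correlations(r1, n1, r2, n2):
        """Two-tailed p-value for the difference between two independent
        Pearson correlations, using Fisher's r-to-z transformation."""
        z1 = math.atanh(r1)  # Fisher transform of each correlation
        z2 = math.atanh(r2)
        # Standard error of the difference between the transformed values
        se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
        z = (z1 - z2) / se
        # Two-tailed p-value from the standard normal CDF
        return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

    # Hypothetical usage, with 50 departments in each sample:
    # compare_correlations(0.697, 50, 0.705, 50)
    # a difference that small is nowhere near statistical significance
    ```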

  31. Joe

    It is unfortunate that, YET again, an issue which is of importance for all of us has become a flashpoint for online mud-slinging. CDJ explicitly said in the introduction to the post that the data were incomplete, indeed, she even bolded her disclaimer. Yet, somehow, online drama ensues and AGAIN distracts philosophers away from concrete, practical steps that might be taken to address a serious issue (making corrections to the data, assessing its importance, enabling schools to address shortcomings, etc.). Why are we so bad at this?

  32. Carolyn Dicey Jennings

    Good idea. If you do this and find something interesting, let me know and I will post it. Update: I checked into this and the correlation coefficients are the same. Feel free to run your own independent analysis to confirm.

  33. Carolyn Dicey Jennings

    I have some training, but I could definitely be better at this. A system that enabled personalized rankings based on multiple factors of interest would be very cool. That might be in our future.

  34. Shen-yi Liao

    Thank you, Carolyn, for checking on this so quickly, and of course for your hard work in collecting and analyzing the data. I’m sorry I don’t have the time to perform the analysis right now.
    I find the result very interesting. (Sidebar: do you mean that they’re not statistically significantly different, or that they’re literally the same? If the latter, that would be VERY interesting.) It suggests that faculty reputation may not play as big a role in placement records as the Philosophical Gourmet sometimes suggests. My own conjecture is that institutional factors, such as the name of the school, departmental professionalization training, and university-level funding, play a greater role in placement, so that faculty reputation more or less washes out.
    Again, thanks for collecting and analyzing the placement information, so that we can begin to think about the factors that go into it using statistical methods.

  35. Carolyn Dicey Jennings

    The two values were .69678129 and .70500629. I did not run a comparison at the time, but doing so just now yielded p = .93.

  36. Anonymous

    I’m a PhD student at a school that is ranked quite well for its placement record above but that has earned virtually no recognition from the PGR. For what it’s worth, I can say that the graduates who have landed TT jobs from this department had publications before going on the job market. Publishing early is strongly encouraged, and the faculty are wholly dedicated to providing feedback to students.

  37. Susan

    Carolyn, thank you so much for providing this data. This is precisely the sort of information I would have wanted to know as a prospective graduate student. As Anonymous at 36 illustrates, it’s very important to know whether faculty are dedicated and supportive, and measuring placement rate is one partial way to get at this information. Best of all is a visit to the school so that details may be discussed with current students and faculty, but that happens later in the process.

  38. Carolyn Dicey Jennings

    In case you didn’t notice, when I changed the above to brackets it likely ameliorated this particular problem. That is, NYU is now in a bracket between 14 and 24. (Compare this to the ranking system, which, following the updates, now puts it at rank 20.)
