A few days ago I posted a list of features that I take to be essential to an ideal report on placement, seeking comments and suggestions. One of the features I mention there is recency. All departments are likely to place more candidates given more time, but this slope is steeper for some departments than for others. Moreover, placement varies from year to year. Thus, one's choice of time frame can substantially alter placement data. This is the reason that Brian Leiter's numbers for NYU look better than mine (here and here): I looked at the years 2012 to 2014 (3 years in the recent past), whereas he looked at the years 2005 to 2010 (6 years in the distant past).* Looking at NYU's placement page, one can easily see that the percentage of graduates placed in tenure-track jobs drops as one approaches the present. As I said, this is likely true for all departments. This means that if you look at data from the distant past, it might not matter much how long the time frame is, but if you look at data ending in the recent past, the length of the time frame makes a real difference. That is, for NYU, for windows starting in 2005, a 6-year time frame has 87% TT placement, a 5-year time frame has 90%, a 4-year time frame has 88%, and a 3-year time frame has 90%. But for windows ending in 2013, a 6-year time frame has 69% TT placement, a 5-year time frame has 65%, a 4-year time frame has 56%, and a 3-year time frame has 56%. Note that even the 6-year window ending in 2013 is associated with much lower placement than any of the windows starting in 2005. It seems obvious to me that we should favor more recent data, since they reveal which departments place students more quickly and since they are more relevant to students now looking at graduate programs. Beyond that, it is not obvious just what length of time we should choose (3, 4, 5, or 6 years) or which year we should use as the endpoint.

Yet one's choice of time frame has a large impact on comparative placement data. Let's compare NYU's placement page to the placement pages of those departments that I found with these methods to have the highest tenure-track placement rates: Berkeley, Princeton, Pittsburgh HPS, and UCLA. If we look at NYU's worst time frame, it comes out behind all the others (2010-2013: NYU 56%, UCLA 59%, Berkeley 63%, Princeton 65%, and Pittsburgh HPS 88%). If we look at NYU's best time frame, it comes out ahead of all the others (2006-2009: NYU 94%, UCLA 67%, Berkeley 78%, Princeton 86%, and Pittsburgh HPS 93%). If, on the other hand, we look at multiple time frames, then a new type of comparison is possible. We can determine, for example, which department has the least low value for tenure-track placement, given any time frame in the period from 2005 to 2013 (with a 3-year minimum and a 6-year maximum). In that case, Pittsburgh HPS comes out on top: its lowest value is 85%. In comparison, the lowest value for Princeton is 65% (2010-2013), the lowest value for Berkeley is 59% (2011-2013), the lowest value for UCLA is 52% (2009-2012), and the lowest value for NYU is 56% (2010-2013). So if we look at the least low placement across all of these time frames, NYU comes out second to last. Finally, if we look at the full range, from 2005 to 2013, NYU comes out in the middle (Pittsburgh HPS 93%, Princeton 76%, NYU 74%, Berkeley 70%, UCLA 65%).

Suffice it to say, these decisions have a substantial impact on one's results. For that reason, one should attend carefully to the justification for one's choices about recency and time frame. I will remove the links to Brian Leiter's two posts on placement data here, since I am concerned that they will mislead students. If I had written those posts, I would certainly take them down knowing what I have made clear in this post (i.e., that the numbers for NYU are inflated, relative to other departments, for the very time frame that Brian Leiter chose to look at). I have emailed Brian a link to this post.

As for my data, I use the years 2012 to 2014 because those are the most recent years and the years for which I have large data sets. (ProPhilosophy was kind enough to email departments directly in 2012 and 2013, which substantially increased the number of reported hires for those two years.) To go back before 2012, I would have to either look at individual placement pages for all 118 departments, many of which do not have data of the sort I need, or use what I know to be a skewed sample from the Leiter Reports blog. I have made clear that any rankings I produce are a work in progress and should not be taken as authoritative. (That is one reason I post them to blogs, and not to an independent website.) But as time goes on and this process is improved I will have to start making decisions about which time frames matter. I may well follow the lead of David Marshall Miller in reporting multiple time frames, since this might be helpful for students. Suggestions on this point are welcome. (The data that I used for this post are after the break. Feel free to suggest corrections where needed.)

*I hope that this does not need saying, but I am not picking on NYU here. One of my dissertation advisors was at NYU and one of my best friends is currently a student there. I am looking at NYU because it appears to be a focal point in Brian Leiter's criticism of my work. If one were to look at other measures beyond just tenure-track placement, NYU might well fare better than it does here.

Update (7/14/14): To address the worry that NYU is particularly burdened in this measure by graduates of its JD/PhD program (2 graduates from NYU left academia for law in this time period, compared to 1 from Princeton, 3 from Berkeley, and perhaps 2 from UCLA), I compared NYU to these other programs while leaving out all those graduates who left academia. In that case, as I point out in the comment below, it is still clear that time frame matters and, in particular, that the 2005-2010 time frame inflates NYU's record: 2008-2013 puts NYU in the middle of the group, at 80%, whereas 2005-2010 puts it at 95%, square with Berkeley and Pittsburgh HPS and ahead of UCLA and Princeton. (It might be worth noting that, with the same methods, Fordham University placed 69% of its graduates into tenure-track jobs between 2008 and 2013.) See my comment below for details.

             NYU          UCLA         Berkeley     Princeton    Pittsburgh HPS
Year         TT   Grads   TT   Grads   TT   Grads   TT   Grads   TT   Grads
2005          2    3       4    4       2    3       5    5       4    4
2006          4    4       2    4       2    3       8    8       8    8
2007          3    3       4    5       2    3       4    5       3    4
2008          5    6       3    3       5    5       9   11       0    0
2009          5    5       1    3       5    7       3    4       3    3
2010          1    2       3    4       2    2       2    4       1    1
2011          4    6       4    6       4    7       5    5       2    2
2012          4    6       3    8       4    7       7   11       2    2
2013          1    4       6    9       2    3       8   14       2    3
(TT = tenure-track placements; Grads = number of graduates that year)

 

Window       NYU    UCLA   Berkeley   Princeton   Pittsburgh HPS

(6 years)
2005-2010    87%    74%    78%        84%         95%
2006-2011    85%    68%    74%        84%         94%
2007-2012    79%    62%    71%        75%         92%
2008-2013    69%    61%    71%        69%         91%

(5 years)
2005-2009    90%    74%    76%        88%         95%
2006-2010    90%    68%    80%        81%         94%
2007-2011    82%    71%    75%        79%         90%
2008-2012    76%    58%    71%        74%         100%
2009-2013    65%    57%    65%        66%         91%

(4 years)
2005-2008    88%    81%    79%        90%         94%
2006-2009    94%    67%    78%        86%         93%
2007-2010    88%    73%    82%        75%         88%
2008-2011    79%    69%    76%        79%         100%
2009-2012    74%    52%    65%        71%         100%
2010-2013    56%    59%    63%        65%         88%

(3 years)
2005-2007    90%    77%    67%        94%         94%
2006-2008    92%    75%    82%        88%         92%
2007-2009    93%    73%    80%        80%         86%
2008-2010    85%    70%    86%        74%         100%
2009-2011    77%    62%    69%        77%         100%
2010-2012    64%    56%    63%        70%         100%
2011-2013    56%    57%    59%        67%         86%
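
For anyone who wants to check or extend these figures, the sketch below shows the arithmetic behind the tables, assuming that each window percentage is simply the summed TT placements divided by the summed graduates for the years in that window, and that the "least low value" is the minimum over all 3- to 6-year windows. The code is illustrative only; the data are copied from the first table, and the helper names are mine.

```python
# Minimal sketch: reproduce the windowed TT-placement percentages above.
# Data are the per-year (TT placements, graduates) counts from the first table;
# each window percentage is assumed to be sum(TT) / sum(graduates).

counts = {  # year: (TT placements, graduates)
    "NYU":            {2005: (2, 3), 2006: (4, 4), 2007: (3, 3), 2008: (5, 6), 2009: (5, 5),
                       2010: (1, 2), 2011: (4, 6), 2012: (4, 6), 2013: (1, 4)},
    "UCLA":           {2005: (4, 4), 2006: (2, 4), 2007: (4, 5), 2008: (3, 3), 2009: (1, 3),
                       2010: (3, 4), 2011: (4, 6), 2012: (3, 8), 2013: (6, 9)},
    "Berkeley":       {2005: (2, 3), 2006: (2, 3), 2007: (2, 3), 2008: (5, 5), 2009: (5, 7),
                       2010: (2, 2), 2011: (4, 7), 2012: (4, 7), 2013: (2, 3)},
    "Princeton":      {2005: (5, 5), 2006: (8, 8), 2007: (4, 5), 2008: (9, 11), 2009: (3, 4),
                       2010: (2, 4), 2011: (5, 5), 2012: (7, 11), 2013: (8, 14)},
    "Pittsburgh HPS": {2005: (4, 4), 2006: (8, 8), 2007: (3, 4), 2008: (0, 0), 2009: (3, 3),
                       2010: (1, 1), 2011: (2, 2), 2012: (2, 2), 2013: (2, 3)},
}

def window_rate(dept, start, end):
    """TT placement rate for graduates in the years start..end (inclusive)."""
    tt = sum(counts[dept][y][0] for y in range(start, end + 1))
    grads = sum(counts[dept][y][1] for y in range(start, end + 1))
    return tt / grads

def least_low(dept, first=2005, last=2013, min_len=3, max_len=6):
    """Lowest windowed rate over all 3- to 6-year windows in 2005-2013."""
    rates = [window_rate(dept, s, s + n - 1)
             for n in range(min_len, max_len + 1)
             for s in range(first, last - n + 2)]
    return min(rates)

for dept in counts:
    print(f"{dept}: 2005-2010 {window_rate(dept, 2005, 2010):.0%}, "
          f"2008-2013 {window_rate(dept, 2008, 2013):.0%}, "
          f"least low {least_low(dept):.0%}, "
          f"full range {window_rate(dept, 2005, 2013):.0%}")
```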

58 responses to “Why Recency and Time Frame Matter”

  1. David Chalmers

    carolyn: as you know, i think your analyses are valuable. but i think it’s wrong to value recent results over long-term results. long-term results are much more important here than recent results.
    in the case of NYU, which you focus on here, many candidates (including three or four in your 2010-13 group) take attractive research postdocs, often without going on the job market more widely. that’s a very good result, not a bad result, and shouldn’t count against a department in these analyses. then some people (including one 2013 graduate in this group) take a temporary teaching job for a year or two. if they end up in a t-t job (as is pretty much certain in this case), that’s a very good outcome. another thing worth noting is that NYU has an active JD/PhD program, with the inevitable result that some candidates from this program (including two in your period, i believe) take highly attractive positions in law firms without going on the academic job market (where they could easily have obtained a t-t job if they wanted). looked at this way, i’d say it’s very likely that in three years or so, all but two of the NYU graduates from your problematic 2010-13 period will have obtained either t-t jobs or positions more attractive to them — which isn’t so different from the results of your golden 2006-09 period (where all but one obtained such a position).
    i don’t mean to be blowing NYU’s horn, and you should feel free to take my possibly-partisan judgments with a grain of salt. of course NYU’s placement record is imperfect and it may well be that other departments will do as well or better over certain periods, especially once their placement outcomes are examined under a fine-tooth comb as i’ve done here. but i do think it’s important to focus on long-run outcomes in these analyses.

  2. Derek Bowman

    One thing I would caution you to shy away from is thinking that there is anything like a ‘right’ answer as to ‘which time frame matters.’ For that reason, I think reporting multiple time frames is best.
    As you note, placements vary from year to year and, because the absolute numbers for most departments are so small, the choice of time frame will change the numbers dramatically. What this should show is that, while the numbers are useful for starting a conversation, we need to avoid fetishizing any collection of data as the ‘hard numbers.’
    Speaking from my own experience, the number of changes that took place during my seven years at the department I graduated from (including changes in faculty roster, changes in grad school funding policies, changes in department placement services, and a significant downturn in the global economy) mean that there is really no recent time period you could select in which the job placement ratios would not be either anomalous or no longer relevant.
    I don’t think that’s a reason not to try to compile and analyze this data. But I do think the data is likely to be more useful at analyzing trends for the whole field than it is at evaluating particular departments at particular time slices. At the level of individual departments and individual student choices, I think the data is more useful in raising questions than in answering them. In that respect, I think the current, in-progress and contested state of job placement measures may be better than a more standardized set of numbers which can be mistaken for the final word on the matter.

  3. Carolyn Dicey Jennings

    “i think it’s wrong to value recent results over long-term results”
    This is a false dilemma. The question I raise is about the recent past versus the distant past with respect to relative placement. For relative placement, the recent past seems to provide more valuable information than the distant past. NYU does worse in the recent past even if we use long-term measures. Look at the 6-year period ending in 2013, for example. This is recent “long-term” data in which NYU comes out in the middle of these programs with 69% placement, which is far below the 87% that Brian Leiter reports (you can’t see this in the table above, but NYU comes out half a point behind Princeton using that method). If one looks at the full period from 2005 to 2013, NYU also comes out in the middle with 74% (also far below Brian Leiter’s reported 87%). The point about postdoctoral and other temporary positions should be just as true of any of these other prestigious programs. If you look at any time period other than one from 5 or more years ago, NYU does not come out ahead here. That is the point. Why use that particular window? It only deceives students as to the point in question.

  4. Carolyn Dicey Jennings

    Some good points here. Thank you.

  5. John Schwenkler

    Hi Carolyn, I imagine Dave would say that the case of NYU might be disanalogous from that of other departments insofar as more of NYU’s students are offered prestigious postdocs that they accept in lieu of TT positions, or decline less attractive TT positions in favor of temporary ones because they’re confident in being offered other TT positions down the road, in addition to what he says about JD/PhD students taking positions in law firms. Of course it would take more study to support any of this. But as you know, this is one of the very general problems with ranking programs based on placement, namely that it’s extremely hard to make apples-to-apples comparisons.

  6. Carolyn Dicey Jennings

    I suspect that if you ask graduate students whether they would rather have a tenure-track job or a postdoctoral position, all else being equal, they will say the former (of course, one doesn’t normally have to choose). If students are placed into prestigious postdoctoral positions, that is a good thing. But that means that they probably did not yet have a tenure-track offer that they were willing to accept. Yet, in other programs, such as Pittsburgh HPS and Princeton, students are getting tenure-track offers that they are willing to accept. And they are getting them right away. That seems preferable to me. If we look too closely at any one program it can start to seem like we should change all the other metrics to suit them. But if we are going to use comparative methods at all, we have to think as objectively as possible about these issues. This is why neutrality and transparency are so important, and why having a clear interest in the result can be so pernicious. One way to look at the issues you mention in an objective way would be to try to determine which candidates sought tenure-track placement. The candidates themselves might be a good resource here, but an imperfect one. Do you have any ideas on how to capture this data?

  7. John Schwenkler

    Carolyn, I agree with much of what you are saying here, though part of my point was that if X has a higher standard than Y for what counts as an acceptable TT offer, then the fact that X didn’t receive such an offer whereas Y did doesn’t necessarily reflect on the “placement” of their respective departments. But again, I freely admit that this is just speculation.
    Regarding your broader questions, as I think you know I’m skeptical of the utility of comparative methods in this case. I think it is good to have a resource where placement data are aggregated, ideally having been reported as consistently as possible from one department to another. And students should pay attention to these data in choosing where (and whether) to go for graduate study, albeit with the huge caveat that an enormous factor in a department’s placement record is the ability that students themselves have when they enter a program, and placement data are bound to reflect this. But issues like the ones being raised here and elsewhere, as well as the one I’ve just noted, seem to me like good reasons against ranking departments based on placement, because they mean that there are just too many contingent, hard-to-discern differences behind the numbers we’re trying to compare. Of course there are plenty of other good reasons that point the opposite way, e.g. the need to supplement rankings based solely on reputation. I’m not sure, though, that the best way to do the latter is through another ranking, as opposed to a clearinghouse for (neutrally collected and transparently displayed, as you say) some more relevant data. For my part, I think this might also be a much better way to present the reputational data collected for the PGR.

  8. John Schwenkler

    PS. For the record: I know Carolyn, admire her philosophical work, and am in awe of the time and effort she’s put into improving our grasp of what’s going on in the philosophy profession. I also think that the recent attacks against her, by Brian Leiter and others, have been utterly shameful. This is why I’m trying to engage them more constructively.

  9. Eric Steinhart

    The method seems wrong to me. I would score the number of placements relative to the number of jobs available per year, thus obtaining a normalized measure. I see no reason at all why time would have anything to do, either positive or negative, with the metrics. Unless you were to construct moving supply-demand curves (how many new jobs, how many new seekers versus old seekers still on the market, etc.).

  10. Carolyn Dicey Jennings

    (and @David Chalmers) I ran a new test to check for this. If I remove all those graduates that were placed into law firms or other non-academic jobs, the situation is similar to what I say above. In NYU’s best time frame (2005-2007), it ties with Princeton for 100% placement. In NYU’s worst time frame (2011-2013), it comes out behind both Pittsburgh HPS (86%) and Princeton (71%) at 69%, coming out ahead of Berkeley (67%) and UCLA (62%). For least low value, NYU ties with Princeton (69%), behind Pittsburgh HPS (85%). For overall percentage, NYU (85%) comes out behind Pittsburgh HPS (93%), with Berkeley at 82%, Princeton at 81%, and UCLA at 73%. For the 6-year time frame ending in 2013, NYU still has a placement of only 80%. Both Berkeley (82%) and Pittsburgh HPS (91%) have higher values for this period. So, again, I think that using the time period of 2005-2010 overly inflates NYU’s appearance in a way that is deceptive to graduate students.
    One important result of my work, that I have no doubt is bothersome to many people, is that it highlights programs that have high placement but that are not recognized by the PGR. One such program is Fordham. If I use the very same methods I use for these other programs, Fordham has a tenure-track placement rate of 72% over this entire time period (2005-2013) (recall that UCLA’s overall placement for this period is 73%). Their best time frame is 2005-2009 with 86%, and their worst time frame is 2010-2012 with 56%. I was led to these findings by sheer curiosity–I wondered how others did on the job market. I find it striking that more is not being said about this.
    Update: I want to add that choosing what analysis to perform on the basis of interested parties is not good practice, for obvious reasons. In this case, the fact that candidates leave academia may just as well speak against a program. I do not plan to use this method without independent justification in the future. I only use it here to demonstrate that the point I make above stands even if we change the goalposts to suit interested parties.

  11. Carolyn Dicey Jennings

    I very much appreciate this public support, although I regret its necessity.

  12. John Schwenkler

    Carolyn, this second point is really important, and it’s astonishing to me that anyone could find it bothersome (though I expect that some do).

  13. Owen Flanagan

    Yes. It is notable in your data that Fordham, Oregon, and Vanderbilt do quite well on placement. I gave colloquia at Fordham and Oregon this past year and was impressed by the quality of intellectual life at both places. There are strong philosophers who do critical race theory, American philosophy, feminism, and comparative philosophy as well as core at all three places.

  14. Steve

    My undergrad professors told me (back in the 90s) that the very best placement records were attached to programs at Catholic universities, since they attract and train candidates especially well for TT jobs at Catholic colleges.

  15. Carolyn Dicey Jennings

    Eric Schliesser made the suggestion to look at hiring networks, and I hope that this is possible at some point. This might reveal phenomena like the one you mention here.

  16. Carolyn Dicey Jennings

    I think that time matters in so far as departments do better or worse at different times. This may be due to anomalies that will not impact prospective graduate students, such as a particularly excellent cohort or a particularly bad market. But it may also be due to changes in the department or the field at large that could impact a prospective graduate student’s chance of finding placement. In any case, I think time can certainly make a difference here, and not just in the sense of available jobs. I think the normalization idea is a good one, although it might have to be determined by AOS, which would be tricky. Knowing just where graduates apply would be very helpful here, but it’s unrealistic to think I will ever have access to that data.

  17. anonymousforthis

    I admire CDJ’s efforts a great deal, and I think it’s very important that prospective graduate students have as much information (accurate and complete as possible) about what each individual program is doing in terms of placement. To that end, I hope that your efforts here will lead to more transparency on the part of departments about all of this.
    But there are so many variables in play here that it’s hard to know how to do any rankings based on placement. I don’t know what I’d do, even if given full information by all the graduate programs in the world.
    Here are just a few of many considerations that (I think? hard to know–I’m not following all the blog discussions) haven’t yet been mentioned.
    1: It’s absolutely true that one aspect of modern job market hell is that many outstanding candidates spend a year, two or even three in post-docs before landing something tt. I have myself advised graduate students to take prestigious, low-to-no-teaching post-docs over the tt jobs they were offered, because (a) the tt wasn’t a very good fit for that student in the end; (b) the post-doc was prestigious enough and long term enough to allow space and time to publish; (c) coming from a mid-ranked PGR school, being at a post-doc would allow the student contact with others to write letters in a year or two. These are judgment calls that rely on fine details. I can’t imagine any way to figure this into rankings of programs.
    2: In my view, if someone has been on the market 1 or 2 (maybe 3) years, and held a post-doc in that time (or some of it), then I would expect the letters of reference to come from their graduate program, their work to still largely be the outgrowth of work in graduate school (dissertation and immediately post-diss). But beyond yr 3, I’d expect someone to have developed beyond their graduate school training, and hope they’d have at least one letter in their dossier from someone who was not on their diss. committee (exceptions there too). That means, as a very general rule of thumb, with lots of exceptions, if x school places someone in a post-doc, and then a year later in a tt job, I’d think of both the post-doc and the tt job as prima facie to the credit of the graduate program; but if they had a post doc for a year or two, and another, and so on, and land a very nice tt job let’s say 5 years post-graduate school, I’d be much more hesitant to attribute that person’s success to the department to the same extent as the first case. And I can imagine sensible people thinking this is completely wrong. What it’s hard to imagine is a way of calculating all of this into a ranking of programs based on placement, a ranking we could all agree gets what we need to get out of it.
    3: So yes, Fordham and the like place well in part because they have established connections with other Catholic schools. But there is a whole swath of philosophers for whom that will be the relevant aspect of their placement record (not the mere fact that they do well in placement) because they are persons who, say, are lgbtq …

  18. anon junior

    It’s really helpful to be able to look at your data so directly. Independent of your conclusions, this transparency is a huge methodological gain on other ranking systems.
    So kudos for sharing the data so thoroughly. But looking at it gives rise to a question. If we compare the total number of graduates in these programs over different periods, we start to see weird patterns. Here’s my re-sorting of your data above:
    total number of graduates per school per 6-year period
    years      NYU   UCLA   UCB   Pitt_HPS
    '05-'10    23    23     23    20
    '06-'11    26    25     27    18
    '07-'12    28    29     31    12
    '08-'13    29    33     31    11

    total number of graduates per school per 3-year period
    years      NYU   UCLA   UCB   Pitt_HPS
    '05-'07    10    13     9     16
    '06-'08    13    12     11    12
    '07-'09    14    11     15    7
    '08-'10    13    10     14    4
    '09-'11    13    13     16    6
    '10-'12    14    18     16    5
    '11-'13    16    23     17    7
    Notice something weird about the Pitt HPS data. The total number of graduates from each of the three other programs increases steadily. But Pitt has a significant decrease over time.
    I don’t know what the story is at Pitt, but I presume there is some reason why there were so many more graduates in earlier years than there have been recently. I suspect we would see similar bursts if we had comparable data for many other schools. Lots of things can explain this: a senior faculty member leaves and a cohort of grad students go with them; the university reduces funding and superannuated students rush to finish all at once. (Again, I have no idea what Pitt’s story is. I am speculating about the general point.)
    Whatever the cause, we can at least say this: a smaller number of grads is generally an advantage for placement percentage. Fewer grads means more time the placement officer and letter-writers can invest in assisting each grad. Therefore an ideal metric would somehow factor in the total sample size — placing 80% of 5 grads is in some ways less of an accomplishment than placing 80% of 25 grads. Reporting the percentage alone can be misleading in some circumstances. (This isn’t a knock on Pitt. Note that Pitt had high placement percentages even in earlier years with larger grad populations.)
    But a more important point: the shift in Pitt’s total grad size drives home that small Ns lead to noisy data. If a program only has 5 grads in a three year window, then entirely personal factors (e.g. a distracting divorce, a two-body problem, an ardent but impractical refusal to live anywhere but Idaho) could drive 20% of that school’s score. This seems like a reason to use time windows that are as wide as possible in order to generate larger populations. Given that none of the 3-year-school-populations above exceeds an N of 23, I am skeptical of the value of any particular 3-year-window comparison.
    These are meant to be constructive criticisms. I think that the most useful statistic you have is for the largest period (2005-2013). But in the final analysis it might be good to report this alongside a graph of three-year-moving-average, which should point to patterns in a department’s recent performance. The trick will be to represent both the moving average and the total population size. (Consecutive pie charts with N-scaled radii might work.)
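    To put a rough number on the small-N worry, here is a tiny illustrative sketch of how far a single graduate can move a placement percentage, using only the cohort sizes mentioned in this comment:

```python
# With N graduates in a window, flipping one graduate's outcome
# moves the placement percentage by 1/N.
for n in (5, 11, 23):  # cohort sizes mentioned in this comment
    print(f"N = {n:2d}: one graduate shifts the rate by {1 / n:.0%}")
```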

  19. David Chalmers

    hi carolyn,
    again, i think your analyses are terrific and valuable. and i don’t mean to be too much of an NYU partisan here. it’s entirely consistent with my evidence that there are other departments that do at least as well in long-term placement as NYU. and i think your data is especially valuable for bringing out the way in which lower-ranked departments (e.g. fordham) do much better in placement than many would antecedently have expected.
    that said, i think there’s an obvious problem with the claim that recent-past information provides a more accurate measurement of relative placement, at least when recent-past information is restricted to t-t placement. for many students in strong departments, research postdocs are at least as attractive as immediate tenure-track jobs. (you say that the latter “seems preferable to me”, but obviously that’s a value judgment that may not be universally shared.) it happens very frequently at NYU that a student is offered and accepts such a postdoc before going on the broader job market. it seems likely, though i don’t have hard data, that this happens more often at certain programs (including NYU) than others. i don’t say this is a fair system — it may well reflect unfair differential opportunities for students in certain programs — but it certainly shouldn’t count as a reason for prospective students not to attend those programs.
    of course it’s very hard to come up with a perfect metric and i appreciate the difficulties of what you’re doing. two alternatives to accommodate the postdoc issue would be (i) “recent placement” metrics that group together research postdocs and t-t positions and (ii) “long-term” metrics that start, say, three years post-phd (unlike your “long-term” 2007-13 data in the comment above, which are affected by the same issue). but no doubt these metrics would also have their problems — e.g. (ii) would tend to reflect past glories rather than present status. so i’m not recommending any metric as a universal panacea. but by the same token, i think it makes sense to recognize the particular misleading effects that recent-tt-placement measures are subject to.
    anyway, i hope you keep up the good work. before long we should have expanded the philjobs appointments database in such a way that all sorts of different statistics about placement will be easily measurable and available.

  20. Carolyn Dicey Jennings

    This is something I have seen, too. I once made a promise to myself not to give up before I had gone on the market at least three times, since it seems that is what it takes for a good many people to get tenure-track jobs. It is easy to get discouraged after even just one year on the market, and I made that promise to myself to keep my head straight when times got tough. (But in fact, I was very lucky.) I think this is likely to be true for just about any department, which is one reason I don’t understand the claim that we should look at years long past. Assuming that placing people into tenure-track jobs sooner is better (putting aside for a moment the question of “quality”), we will need to look at the recent past in shorter time scales (e.g. 3 years) to get a sense of which departments do this successfully.
    This is a really excellent point. I am not sure what to make of it yet, but I will think about it.
    I don’t know enough to say whether Catholic universities (or universities “in the Jesuit tradition,” like Fordham) are more or less discriminatory towards LGBTQ philosophers. I interviewed at two Catholic institutions. I got the impression in both cases that this would not be an issue. In one case, the interviewers were very professional and seemed well-trained in how to interview candidates–it did not seem like the kind of group that would allow discriminatory questions to pass muster. In the other, the administration and faculty were openly supportive of LGBTQ (and some of them were proud representatives of that group). I do think that these universities have a tendency to value the history of philosophy more than is standard, and that some students will not be interested in the types of jobs that value the history of philosophy, whereas others will. This is all to say that you are right that these networks of placement matter and that students will want to know more about these things, but I would want to see just what is true about those networks.

  21. Carolyn Dicey Jennings

    Good points. I looked at the NRC data, and Pittsburgh HPS had an average of 3.8 graduates per year between 2002 and 2006. That makes 2005-2007 look like an unusually large group. I think that you are right to be worried about small samples. Perhaps a 3-year window is too small. And yet I find it helpful for revealing which departments manage to place candidates more quickly than others. The above case is not a perfect one for demonstrating this, since those are all prestigious departments. But I suspect that the shorter time window will help us to see the difference between a department like Pittsburgh HPS and a department that places most of its graduates in tenure-track positions, but only after several VAPs.

  22. Carolyn Dicey Jennings

    This might be a place where a survey of graduate students and/or job candidates would be useful. If there is a contingent of students/candidates that find postdoctoral positions to be equal to or more attractive than tenure-track jobs, all else being equal, then it might be relevant to treat them as such. I will think about how to look into this. I doubt for myself that this is true of most prospective students, but I would have to actually hear from these groups in some sort of official measure to be sure. In any case, I think the buffer that we leave should reflect what prospective graduate students take to be ideal, whatever that is. Let’s say, for the sake of argument, that prospective graduate students take a three-year window between graduate school and a tenure-track job to be ideal. Placement pages almost never provide year by year data on past graduates. Thus, we will likely have to look a few years into the distant past, and provide a buffer that way. If three years is ideal, the buffer should likely be situated right around that ideal number of years, rather than taking that number of years as a minimum. That is, for a three-year time frame we would want to look at graduates between either 2010 and 2012 or 2011 and 2013. This gives the 2010 cohort 4 years, the 2011 cohort 3 years and the 2012 cohort 2 years OR the 2011 cohort 3 years, the 2012 cohort 2 years, and the 2013 cohort 1 year. The above analysis already provides this approximate buffer, even for the shortest, most recent time frame. Thus, if anything, the analysis already provides an advantage to this ideal over the one I presume that most graduates favor.

  23. Carolyn Dicey Jennings

    As for my other work, it might be helpful to know that the methods there also include a buffer. That is, I looked at tenure-track placements between 2012 and 2014 and compared that to the number of yearly graduates from 2009 to 2013. Thus, graduates were represented from cohorts between 1 and 5 years ago, for an average of 3 years between graduation and placement. To look at the years 2005 to 2010 is to provide a buffer of between 4 and 9 years, for an average of 6.5 years, and I would be surprised if any graduate student found that ideal.

  24. David Chalmers

    “That is, for a three-year time frame we would want to look at graduates between either 2010 and 2012 or 2011 and 2013”. that doesn’t seem quite right to me. i’d think that given the relevant assumptions about ideals (where research postdocs are just as valuable as t-t jobs within the first three years), we’d want to either (i) look at data ending three years ago, so around 2011, depending on time frames, or (ii) group research postdocs with t-t jobs for recent phds. otherwise we’d get the same problem: recent (post-2011) phds in research postdocs would count as a negative in the statistics (as they do in your analyses), when according to the stipulated ideal, they shouldn’t.

  25. Carolyn Dicey Jennings

    If the ideal scenario is one in which one has a minimum of 3 years, or between either 3 and 6 or 3 and 9 years between graduating and being placed in a tenure-track position, then choosing 2011 as an endpoint would be right. But recent PhDs push against the “3 years is valuable” assumption just as much as PhDs from more than 3 years ago, so these groups have to be held in balance. Hence my assumption that we should look at the group that is an average of three years out. Again, assuming that 3 years is the ideal, which I doubt.

  26. Carolyn Dicey Jennings

    Perhaps I should clarify another assumption of mine–postdoctoral positions are only equally valuable to graduates within a certain period of time, after which they become less valuable. I am assuming a three year period as the divider here for your sake.

  27. Anon

    Just a quick comment on the earlier discussion of T-T jobs versus Post-Docs. I’m a PhD student at an Australian research university. My plan for the job market is to seek out first and foremost a Post-Doc position. It will give me more time to research and publish and hence land a better T-T job. That is, I would take an attractive Post-Doc over a T-T job with a 4/4 or 5/5 teaching load. Since I’m only one student I hardly make up an adequate sample size for a general claim about the value of Post-Docs. Having said that, I know that many of my friends in graduate school would do the same.

  28. Carolyn Dicey Jennings

    This is helpful, Anon. As I see it, this is not the sort of claim that would support what David Chalmers wants to say, namely that, all else being equal, a student would rather have a postdoctoral position for a certain number of years. You say that you want a postdoctoral position to get a better tenure-track job, but you would need to say that you would rather have the postdoctoral position for a certain number of years, even if your chances were the same. Or so it seems to me. If you were to guess, what would you say to be the ideal number of years for most graduates between graduation and landing a tenure-track job, all else being equal?

  29. Anon

    This is a tricky one and I’m hesitant to speculate, but here goes: It depends on what we mean by “all else being equal.” If by that we’re comparing an attractive 2- or 3-year Post-Doc to a T-T job with a 2/2 or 2/1 teaching load at an R1 or R2 research school, then of course I take the T-T job (and I’m confident all of my PhD friends would too). If my chances of landing such a T-T job turn out to be the same before or after the Post-Doc, then I take the T-T job (though it’s hard to imagine how they could be the same if I’m publishing during my Post-Doc).
    But we can make things slightly more complicated to help show how much I (and again I think most of my friends) value Post-Docs. If I’m offered an attractive Post-Doc and an attractive T-T job at the same time then I take the T-T job. But if I’m able to negotiate completing the Post-Doc before taking up the job offer then I definitely do that. Or suppose I have 3 options: (1)Attractive Post-Doc; (2)Attractive T-T job; (3) Attractive Post-Doc and Attractive T-T job. I pick (3) every single time. Maybe for those students fortunate enough to be at programs like NYU they find themselves able to take (3) more than graduates from less Leiter-ific programs?
    I’m not sure if any of that makes sense. But to answer your question: No one expects to land a job their first year on the market. I would say that landing a T-T job between 2 and 4 years after graduation is ideal (at least on today’s market), all else being equal.

  30. Carolyn Dicey Jennings

    In case my point isn’t clear: we are asking here how to compare someone from Princeton, say, who gets a TT job straight after graduating and someone from NYU, say, who gets a TT job after 3 years. The question is how to compare these. It may well be that one of these jobs is better than the other, but we are not measuring that here. We are only looking at TT job, qua TT job. Thus, the only reason to put Princeton and NYU on an equal footing, relative to these candidates, is if we think that it is better to have those 3 years than not. On average, I doubt that this is true. Nonetheless, as I say above, my data already assumes 2-3 years as ideal, just because of the way it is set up.
    As to your other point: if you accept the TT job and postdoc at the same time, then the TT job is already being given the same weight as the TT job for the person with no postdoc in these metrics, since the metrics go by acceptance/report. So this is not the case we are talking about. We are talking about the case in which one ends up with only a postdoctoral position, and then a tenure-track position three years later. Is the department that graduates that candidate, or many such candidates, on an equal footing with a department that graduates candidates who get tenure-track positions right away?

  31. Christy Mag Uidhir

    Carolyn, I’m with David here (both in appreciation and in concern). The desirability of TT jobs over post-docs depends upon lots of factors. For example, when I left Rutgers in 2007 I was faced with just such a decision. I chose to accept a 2-yr post-doc at Cornell over a TT job offer. The biggest factors that played a part in my decision were Teaching Load (TT: 3-3, PD: 1-0), Institutional/Departmental Environment (TT: Undergrad Only Teaching Oriented Department w/ Few Mentorship Opportunities, PD: Great PhD Program with Active Research Department w/ Several Mentorship Opportunities), and Money (TT paid only slightly more than PD). All things considered, I figured that taking the post-doc would likely place me in a better market position in 2009 than I had in 2007 (and certainly better than the market position I could reasonably expect to have applying out of that TT job had I taken it rather than the Post-doc). From the handful of others I know to have faced similar choices—prestigious (2-3yr) post-doc (or visiting appointment) at a research university vs. teaching-heavy TT job at a mid-tier SLAC or non-R1 state school—the post-doc invariably wins out, which I take as having less to do with some commonly held ideal as to time from graduating to TT job and far more to do with a commonly held ideal as to the kind of TT jobs graduate students (at least at certain programs) prefer. The worry then is that properly accounting for this requires making distinctions between the various kinds of TT jobs into which graduates place, something your analysis has yet to do (and of the merits of which you yourself seem not especially convinced).

  32. Carolyn Dicey Jennings

    The problem that I see here is trying to account for differences in perspective on what counts as a good tenure-track position. Let me give you a personal example. I have a 2:1 position. My husband has no interest in a 2:1 position, and would prefer a 4:4. This is because my 2:1 comes with substantial research expectations that he would be happier without and because he sees himself as, first and foremost, a teacher. I know many people who fall into each of these camps. I definitely think that there are better and worse positions. But they are better or worse for different people. I do not think there is a single standard of better or worse that is universally accepted, or even accepted by the majority of graduate students. So here is our problem: how do we represent better and worse here, given these fundamental differences on what is better and what is worse? I would guess that some programs tend to attract graduate students who see one as better, whereas other programs tend to attract graduate students who see the other as better. And for some programs extra time before a tenure-track job is worse (less overall security), whereas for others it is better (more unimpeded research time). But I haven’t figured out how to quantify any of this yet. Suggestions are welcome. I wrote in this post that I will aim to categorize hiring institutions in the future. The best thing that I can think of here is making a placement ranking/grouping that depends on a choice on the part of the viewer (e.g. research focus versus teaching focus). But I am open to suggestions.

  33. Carolyn Dicey Jennings

    Perhaps it is worth noting that the point about “all else being equal” considerations would still hold even if we start categorizing tenure-track jobs. We don’t get to assume that the tenure-track job of the person who has a time lag between graduating and getting a job is better than the tenure-track job of the person who does not. We will have to classify these jobs and stick to the value of that classification.

  34. Derek Bowman

    I guess I’m just not sure those questions have any interesting, general answers. In addition to personal variation, it’s not like these preferences are unaffected by the likelihood of various outcomes.
    If one can be reasonably confident that after 2-3 years of a prestigious postdoc one will have a good chance at a tenure-track job at the end, that’s very different from barely managing to secure a post-doc and thinking that you’re just going to have to play the same long-shot jobs lottery again each year.
    And frankly, I don’t know how much the discipline of philosophy is served by trying to find a way to rank the relative worth of ‘straight-to-TT’ vs ‘reliable-path-from-postdoc-to-TT’ at the schools with the best placement. The most serious problems of placement and employment concern those who have reason to think they may never get a secure position (and those who are unable to secure any form of full-time employment in the meantime).

  35. Carolyn Dicey Jennings

    I think both of these questions matter. As we have seen with the PGR, there is much to lose or gain from comparative claims about philosophy programs. My interest in justice and accuracy on seemingly minute points stems from my experience with watching the effects of that power. I want to be as objective as is possible and to provide information that is as neutral and as helpful to graduate students as is possible. I think that pushing for less temporary labor and longer-term contracts is very important. But that doesn’t make these issues unimportant, I think.

  36. Derek Bowman

    While the power of social hierarchies in academia is not exhausted by its effects on employment, philosophers with stable paychecks and a livable wage have a lot more space in which to form their own intellectual communities and pursue their own conception of what’s intellectually worthwhile (indeed, isn’t that part of the point of tenure?). And a labor market in which candidates can meaningfully think about what kind of job they want (rather than wondering what kind of job, if any, they can get) will do more than an attitude adjustment about prestige to allow people to find the jobs that suit their own needs and dispositions.
    I fully admit that my vision is probably too focused on the problems that are close to me and people I know. But for those of us just treading water in our careers, it’s hard to get worked up about which lifeboat has the best seats.

  37. HPSgrad

    Pitt HPS had one outlier placement year in 2006. Because of accidental happenstance (some finishing early, some finishing late), ten graduates sought jobs that year. All ten were successfully placed–nine into tenure-track positions, one into a prestigious post-doc. Otherwise, the average number of job-seekers is 3-4, not counting those moving from one post-graduation position to another.

  38. anon junior

    Another way to address my concern above (about the noise generated by small Ns):
    Make the size transparent to the audience. For each school, generate a simple line graph as follows. The x-axis corresponds to year, the y-axis to grads. One (upper) line charts the total number of grads on the market, while the lower line represents the number placed. Make the y-axis scale consistent across all graphs, so that the viewer can see at a glance how comparatively large a cohort each school has dealt with. At the same time, the difference between the two lines retains the information carried in a percentage.
    I would suggest that you make the y values cumulative from the first year. That is, the first data points are for ’05-’06, the second for ’05-’07, the third ’05-’08, etc. This allows the viewer to see some of the long-term trends, which smooths out noise in any particular year. But the viewer can still see fluctuations by noting the difference in slope between the upper and lower lines at any given point.
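    Here is a minimal sketch of the cumulative series such a chart would plot, using the NYU column from the table in the post; the plotting layer is left open, since any charting tool could draw the two lines from these running totals:

```python
from itertools import accumulate

# Cumulative graduates vs. cumulative TT placements (the two lines of the
# suggested chart), using the NYU column from the table in the post.
years = list(range(2005, 2014))
tt    = [2, 4, 3, 5, 5, 1, 4, 4, 1]  # TT placements per year
grads = [3, 4, 3, 6, 5, 2, 6, 6, 4]  # graduates per year

cum_tt    = list(accumulate(tt))
cum_grads = list(accumulate(grads))

for y, g, t in zip(years, cum_grads, cum_tt):
    print(f"2005-{y}: {t} of {g} graduates placed ({t / g:.0%})")
```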

  39. Carolyn Dicey Jennings

    Also, Fordham is not “lower-ranked”–it isn’t ranked at all. It wasn’t even evaluated by the PGR for overall rankings, at least in 2011 (I didn’t check earlier reports). And yet the placement is about the same as UCLA’s for this full time period of 2005-2013 (72%), using the method of ignoring those who leave academia (I did not check using the other method). Brian Leiter posted that “a program placing about 65% of its graduates in tenure-track jobs is about average,” but either he did not read the very short article he links to or he is intentionally misleading his readers. The link says “about 35 percent of graduating humanities Ph.D.s were still seeking work and weren’t negotiating with any potential employers…Then again, job is a tricky word here. When the NSF asks students whether they have a definite commitment from an employer, it doesn’t differentiate between short-term or part-time jobs and stable, permanent work. In other words, it tosses together adjuncts and teaching fellows along with graduates who end up in the tenure track—meaning the real market might be even a bit worse than this graph lets on.” So 65% of graduates have some kind of academic job (more precisely, 65% are not unemployed), not a tenure-track job. Brian links from his post to a chart showing Boston University at 63% (for the years 2005-2010), clearly implying that this is a below-average placement rate. As I have shown in several posts, anything over 50% is actually relatively high for tenure-track placement. So Fordham and BU are doing very well indeed on these measures. I just happened to choose Fordham on a whim. There may be other programs neglected by the PGR that do even better. That Brian Leiter continues to single out BU in this way, and to post misleading/deceptive claims about the job market, is a disturbing new trend.

  40. Gordon Hull

    I don’t know what Leiter thinks his reference to that chart proves, but it doesn’t prove what he says it does. The biggest problem is distribution – to get to an “average” (mean?) of 65%, you’d have to assume that the numbers are relatively even across disciplines, that disciplines had comparable numbers of graduates, and that there was some sort of normal distribution of placement rates. Outliers would skew the average – it could be that a relatively small number of programs place most of their graduates, leaving a long tail. The average would look inflated in that case (it’s like if we ask what the average income is in a room containing me, my wife, and Bill Gates). Since the data lumps all the humanities together, that is true across disciplines as well (if English placed all its graduates, then that would make the average look high, especially since there are so many English graduates). At the very least, we need to look at medians and not just means.
    Also, like Carolyn says, the 65% lumps together all kinds of employment, including part-time. So the article demonstrates just what it says it does: that the job market for humanities PhDs is horrendous – so horrendous that 35% of new graduates can’t even find part-time adjunct teaching! But that obviously does not prove that a 65% placement rate in TT jobs is average. Intuitively, that seems like it ought to be way, way above average. I suspect that there would be a lot less worry in the world if an average program still had a nearly 2/3 placement rate into TT jobs.

  41. Christy Mag Uidhir

    Carolyn, the Fordham-UCLA comparison highlights one obvious and potentially misleading drawback of not distinguishing between the various sorts of TT positions into which a program might place its graduates. While Fordham’s general placement rate since 2011 may be just as good as UCLA’s, nearly 75% of the TT jobs into which those Fordham graduates were placed are at colleges or universities with overt religious missions and affiliations. That’s surely no accident. As such, it’s obviously misleading, then, to suggest that Fordham’s placement rate is comparable to UCLA’s when the former heavily favors placement into religious institutions that frequently require, as a condition of employment, a statement of a faith that the vast majority of philosophers do not have.

  42. Carolyn Dicey Jennings

    And yet, if a student just wants to know which program has a better chance of getting tenure-track positions for their students, the programs look similar for that time period using those methods. It would surely help students who are not religious to know if a program has a hard time placing such students (likewise for religious students, marital/family status, political beliefs, LGBTQ, women, racial and ethnic minorities, etc.). But I don’t know which programs require a statement of faith and I don’t see a reason yet to suppose that a student who went to Fordham who was not willing to make such a statement would have a harder time getting a job than a student who was, rather than supposing that there is a meeting of student interest and availability here (that religious students are attracted to Fordham and to positions at religious institutions, and that Fordham facilitates just those sorts of jobs). I am open to seeing evidence of this, and I do think it would be valuable. I just don’t see it. A great many universities were founded by religious groups, and a great many of these do not require statements of faith (I did not have to make a statement of faith for any of the many universities to which I applied). That is what I know. If you have time to look into this, let me know what you find.

  43. philosopher

    Here is the link to the mission statement at Xavier College, a Jesuit College in Cincinnati.
    http://www.xavier.edu/mission-identity/heritage-tradition/
    It is directed at students, but clearly faculty are expected to uphold and advance the mission.
    The few interviews I had at Catholic colleges in the USA inevitably included questions about Catholic philosophers (Maritain, for example). One may wonder what the purpose of such questions is. Clearly, someone schooled at a Catholic university can answer these questions quite readily. Others?! … maybe not so easily.

  44. Carolyn Dicey Jennings

    A follow-up: As to the data-gathering efforts, I do intend to report something like Carnegie Classification next year, checking this against departments. As I have noted elsewhere, I wanted to do this in 2012 but have not done so because of the time-intensive nature of such an endeavor. All of this takes lots and lots of time, especially when there are lots of special interests and exceptions to consider. (The story is obviously much different if you just look at a handful of programs and don’t care about accuracy, transparency, etc.) I think this will be better next year, since some people have reached out to me to offer help. But I cannot do it this year. On that note, I plan to do much less blogging about this so that I can get back to my research, teaching, and other service work. But do post any further worries. I will surely read them and respond in time.

  45. Christy Mag Uidhir

    It’s been brought to my attention that only one of the TT jobs into which a Fordham graduate was placed since 2011 actually required an explicit statement of faith (Wheaton College). Point taken. Also, my comment was not intended to suggest that overtly religious colleges or universities (by this I do not mean merely affiliated with or founded by a certain Christian denomination, but having a mission statement explicitly reflective thereof) are not as good as their secular counterparts. The point is simply that there is further relevant information about Fordham’s placement record, specifically that it heavily favors overtly religious institutions and as such, prima facie and to a certain degree, thereby favors graduates of faith–something most philosophy graduate students presumably are not. Fordham’s placement record is impressive and the program should be commended, but its success strikes me as significantly narrowed in a way that UCLA’s presumably is not.

  46. John Schwenkler

    But Christy (if I may), I think that part of Carolyn’s point, and of her mention above of Eric Schliesser’s discussion of “hiring networks”, is that most every program’s success will be “significantly narrowed” in some ways: e.g. many smaller religious schools might, at least prima facie, prefer a Fordham graduate to one from UCLA, either because of the kind of training they’ve received or because faculty at the institution are just more comfortable hiring from a place they know better, etc. (This is, again, one of the reasons why I think rankings based on placement statistics are inherently problematic (because they obscure too many of these differences), but that’s another matter.)

  47. shane wilkins

    I second John Schwenkler’s point that most programs are going to have a specific niche in the job market that they serve. I’m a Fordham grad student, but my point isn’t to toot my own program’s horn. Rather, I want to make two points: First, Fordham has recently been placing in a broader range of programs than the small Catholic liberal arts colleges that have been our traditional niche in the job market. For example, this year we placed one recent graduate into a TT position at a PhD-granting program and another into an MA program. In the last few years we have placed people into prestigious SLACs and fancy public universities in sunny, coastal states. Second, I think a large part of Fordham’s placement success is due to the fact that we produce grads with lots of teaching experience and broad coursework who are capable of filling a broad range of teaching needs as junior faculty members. I think these factors are more important in explaining our placement success than demographic factors like religious affiliation.

  48. Christy Mag Uidhir

    John, I agree. My point was not that Carolyn ought to factor distinctions between kinds of TT jobs (and the values thereof) into her rankings but rather that omitting such information threatens to undermine the entire project if not actively mislead its consumers. The Fordham/UCLA comparison in terms of mere placement percentage I take to be misleading not because it fails to reflect that UCLA placed its recent grads into “better” jobs than Fordham but instead because it fails to reflect the fact that Fordham overwhelmingly tends to place into overtly religious institutions–and 70% of philosophers self-identify as atheist and might reasonably find such employment problematic. This becomes especially important when the particular faith upon which the university mission is based (and ostensibly held by a large portion of the student body) actively condemns or marginalizes–whether at the official institutional level or less formally in the student/faculty/admin culture–certain beliefs, behaviors, and identities otherwise more or less institutionally ignored at secular schools (e.g., atheism, pre-marital sex, drinking, smoking, homosexuality, non-traditional gender identity, having certain racial or political affiliations, or simply engaging in activities that undermine, or advocating positions counter to, the institutional mission or the tenets of the faith itself).
    Again, that 75% of recent Fordham grads placed into overtly religious schools no more means that Fordham’s placement record is thereby less impressive than previously thought (or worse than UCLA’s) than UCLA placing 75% of its recent grads into Cal or Cal State satellites would mean that UCLA’s placement record is thereby impugned. In either case, however, using the mere percentage of grads placed for evaluative or comparative purposes looks radically misleading, if not to an almost criminal degree. Finally, that there are several highly relevant ways in which to categorize TT jobs for purposes of placement information (State/Private, National/International, Grad Program, Undergrad Only, R1, Teaching/Research Load, Religious/Secular, SLAC/RLAC, CC, etc.) isn’t a reason to ignore them all in favor of straight-percentage-based evaluations and comparisons; rather, it’s a reason to think such evaluations likely to be to that extent uninformative, misguided, or even outright deceptive.

  49. John Schwenkler

    I do hope that some of this is intended as hyperbole or satire. (How in the world could what Carolyn is doing be in any way criminal (even “almost” so)?!) As to the charge that she’s being “deceptive” or “actively misleading”, etc., come on: I’m sure she expects that prospective students are smart enough to drill down into placement statistics and look at where graduates have been hired, to see whether this among other aspects of a given department fits with their interests and self-identity.
    What we do agree on is that comparing placement as “better” or “worse” is pretty much a worthless exercise, since the goodness of a given department’s placement record for student X’s purposes is always relative to too many features that are distinctive to her own situation, and might not apply to student Y’s. Then again, arguably the same holds, if not to quite the same degree, for ranking departments on the “overall quality” of their faculty.
