I applaud Brian Leiter's efforts to examine placement data in the past few days *Update 6/13/14: I have removed these links because I think that Brian Leiter's posts have the potential to mislead students. See my new post here*, as well as the efforts of David Marshall Miller and Andy Carson over the past few years. All of this work aims to improve the profession and deserves recognition as such. I plan to continue reporting placement data next year and will likely post the report to an independent website. Below is a list of features that I take to be essential to an ideal report on placement, together with some ideas for improvement on my own work. Please comment below!

1) the original data: as far as I know this is missing from both Brian Leiter's and Andy Carson's efforts. This is important because it keeps the analyses honest by opening them up to public scrutiny. I have provided links to my data and will continue to do so. Recommendations on format are welcome here.

2) the methods: key information is missing in Brian Leiter's presentation, such as the criteria for determining which placements are to "research universities and selective liberal arts colleges," but as far as I can tell David Marshall Miller and Andy Carson are clear and up front about their methods. I have tried to be clear about my methods, but I have received some emails that reveal shortcomings here. Recommendations welcome. 

3) completeness: Brian Leiter's efforts, as of this moment, include only a few departments (which were not selected at random). An ideal report should include all the philosophy departments that have made placements of the type in question, which is something David Marshall Miller, Andy Carson, and I have all tried to do. What is missing from all of our reports is complete placement data. PhilAppointments is not a complete source, for example, but neither are placement pages. Further, placement pages are often missing key data points (such as names, which help to identify duplicate candidates). Next year I aim to cross-reference PhilAppointments with individual placement pages, as sketched below. Recommendations on how to efficiently improve completeness are welcome.
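
To give a sense of the kind of cross-referencing I have in mind, here is a minimal sketch in Python; the file names, column names, and name-matching rule are placeholders rather than a settled method:

```python
# Hypothetical sketch: cross-reference a PhilAppointments export with
# hand-collected placement-page data, matching candidates by normalized name.
import csv

def normalize(name):
    """Lowercase, strip punctuation, and sort name parts so that
    'Smith, Jane' and 'Jane Smith' compare equal."""
    parts = name.lower().replace(",", " ").replace(".", " ").split()
    return " ".join(sorted(parts))

def load_names(path, column):
    with open(path, newline="") as f:
        return {normalize(row[column]): row for row in csv.DictReader(f)}

appointments = load_names("philappointments.csv", "candidate")    # hypothetical export
placement_pages = load_names("placement_pages.csv", "candidate")  # hand-collected

# Names in one source but not the other flag gaps to investigate;
# names in both are candidates for merging (and duplicate removal).
only_in_appointments = appointments.keys() - placement_pages.keys()
only_in_pages = placement_pages.keys() - appointments.keys()
in_both = appointments.keys() & placement_pages.keys()
```

Real name matching would need more care than this (middle initials, diacritics, name changes), which is part of why extra help will be useful here.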

4) recency: since these efforts are in their infancy, it is currently unknown what time frames are relevant. Recent data are ideal, so long as recency is balanced with completeness. Brian Leiter chose a 5-year time frame between 2005 and 2010, which I see as a drawback of his report. Although David Marshall Miller, Andy Carson, and I have all used the most up-to-date data, David Marshall Miller also looked at different time frames. In the future, with more data, the use of time frames should help us to determine how recent our data needs to be. Recommendations on how to proceed with time frames are welcome here, since next year the data set I have will be in its fourth year (2011-2015).

5) neutrality: those collecting, analyzing, and reporting the data should be as neutral as possible with respect to hypotheses and results. I have concerns about this with respect to Brian Leiter's report, especially given the absence of (1) and (2) above. The fact that David Marshall Miller, Andy Carson, and I have performed this work on our own is also potentially problematic, even with the inclusion of the original data and methods. Over the next year I plan to form a task force to work on placement data, composed of several people who have reached out to me over the past week or so (but others are welcome). Having more people on the project should help with neutrality. Recommendations on this point are welcome.

13 responses to “On Future Placement Data”

  1. Jenny Saul

    Many thanks for doing all of this in such a careful and open way, and for inviting scrutiny and discussion about methods. This is a real service to the profession.

  2. Noelle McAfee

    I second Jenny’s thanks.

  3. anon grad

    Leiter is digging his own grave. He’s going to lose too many people’s sympathy soon. I hope he can retreat to his research and stop “digging his hole deeper”.
    http://leiterreports.typepad.com/blog/2014/07/more-thoughts-on-job-placement-in-philosophy.html

  4. shane wilkins

    Here’s a suggestion regarding methodology: The US News rankings aren’t a particularly reliable guide. Instead, I suggest breaking down the placement data in terms of the Carnegie classification of the schools. (Maybe lump all the associate-level classifications together, for legibility.) That would give you ten different boxes to put schools in, which seems fine-grained enough to give an informative picture, but coarse-grained enough to be done within a reasonable timeframe. I’m sure there’s a list of every institution in the US and its Carnegie classification available somewhere.
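
    A toy version of what I mean, with a made-up file and made-up institution names (the real classification list would come from the Carnegie Foundation’s published data):

    ```python
    # Hypothetical sketch: tally placements by Carnegie classification,
    # lumping all associate-level subtypes into one bucket for legibility.
    import csv
    from collections import Counter

    def load_classifications(path):
        # carnegie.csv maps institution -> classification; file and columns are made up.
        with open(path, newline="") as f:
            return {row["institution"]: row["classification"] for row in csv.DictReader(f)}

    def bucket(classification):
        # Collapse "Associate's ..." subtypes into a single category.
        return "Associate's" if classification.startswith("Associate") else classification

    carnegie = load_classifications("carnegie.csv")
    placements = ["Example State University", "Example Community College"]  # placeholder data

    counts = Counter(bucket(carnegie.get(school, "Unclassified")) for school in placements)
    print(counts.most_common())
    ```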

  5. Gordon

    re: Carnegie ratings – I think that’s an interesting idea, but I’m not sure how helpful it will be on its own, because of differences in departments between universities. I can only speak anecdotally of my own case, but my previous job was at a top-level RU/VH (the top classification in terms of research), yet the philosophy department was undergraduate only. On the other hand, the RU/VH classification was probably what kept the contractual teaching load to 3-2, and certainly the institutional climate (both within the department and without) was that of a research university. It was a great job.
    On the other hand, my current job is at a DRU (the bottom of the three doctoral classifications; the institutional aspiration is to go up), but there’s an MA in philosophy, and philosophy faculty routinely contribute to other graduate programs. You can tell that the university is transitioning towards a greater research focus – the RU-feeling isn’t pervasive yet. It’s also a great job – getting either of them (particularly as a first job out of grad school) would count as a really good placement for any grad students I’ve ever hung out with.
    I think the Carnegie status is probably a data point that should be there, but it may not remove the need for something else. I’m guessing it would be a huge pain (because this data would have to be obtained manually? where is big data when you need it…), but getting the teaching load, and whether a department is undergraduate-only, MA, or PhD, might paint a very nice institutional picture when combined with the Carnegie status.

  6. Mitchell Aboulafia

    Yes, thank you for doing this! I realize that these types of reports simply cannot address all of the factors involved in placement, but I want to at least acknowledge another one when we consider the success of grad programs: geographical distribution. Following someone like Leiter can leave you with the impression that success is a matter of being at a research university. This is anecdotal, but it will give you an idea of what I have in mind. In many years at different kinds of institutions, including an RI and Juilliard, I have learned that people will often “sacrifice” a great deal to stay near major urban areas with vibrant cultural lives, e.g., NYC, Boston, San Francisco, etc. Of course these places are not to everyone’s taste, but many grad students make the call to teach at non-research institutions, including community colleges, in order to stay in such places. And they do it not just for their personal lives but because there is more interesting intellectual stuff going on than at, say, a Research I in the middle of some large state. Long story short: it’s too bad we can’t easily factor this in, along with the question: did you get a job in a place where you wanted to live?

  7. Steve

    I second the worry about relying on Carnegie classifications. My university is also a DRU, but there are no graduate programs in any of the humanities or social sciences at all, and the university has an official 4/4 teaching load. (All the Doctoral-level programs are in the professional, engineering, and tech divisions.)

  8. Carolyn Dicey Jennings

    These are helpful suggestions. I think it might help to start by asking students which categories matter to them. From what I can tell, the division between tenure-track (and equivalent) positions and temporary positions is one everyone agrees is important. After that, I am not sure how to proceed. I think that candidates care about the balance between teaching load and research expectations, and perhaps these can be approximated by starting with Carnegie Classifications and then checking department by department. I tried doing this in 2012 and gave up (see my preview for stage 3, here). But it may be a possibility next year with extra help. I also think candidates have regional interests, as Mitchell mentions, but I am unsure of how to examine that. Perhaps the mean distance of the hiring institution from the PhD-granting institution would be interesting? I will bring this up with the group once we form.

  9. grad

    I think that if you used mean distance as a measure, it would need to be restricted to domestic hires. A job in Oceania would really throw off the average. You could use the median instead.
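
    For concreteness, here’s a rough sketch of that computation; the coordinates and placements below are invented, and real data would require geocoding each institution:

    ```python
    # Hypothetical sketch: median PhD-to-hire distance over domestic hires only,
    # so that one far-flung job doesn't dominate the statistic.
    from math import radians, sin, cos, asin, sqrt
    from statistics import median

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two (lat, lon) points in kilometers.
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371 * asin(sqrt(a))

    # (PhD institution coords, hiring institution coords, domestic hire?)
    placements = [
        ((42.37, -71.12), (40.44, -79.94), True),    # invented domestic hire
        ((42.37, -71.12), (-33.87, 151.21), False),  # invented Oceania hire
    ]

    domestic = [haversine_km(*phd, *job) for phd, job, is_domestic in placements if is_domestic]
    print(median(domestic))
    ```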

  10. Chris

    It is frustrating that placement data from some schools is not available and that people looking to compile data about this have to resort to indirect sources such as PhilAppointments. If a fairly large group of philosophers got together to create a placement ranking, maybe they could ask schools to provide placement data to them directly and “shame” schools that do not do this in some way.
    For example, the compilers could ask grad programs to provide a list of every graduate from the last x years and their current status. This could include categories for different kinds of academic jobs, categories for “chose not to pursue academic jobs, decided to leave the profession, currently unemployed, etc.”, and a category for grads who are truly unaccounted for and that the program has lost touch with. Then, if a school did not provide this kind of data, this could be noted in some way in the rankings–for example, by putting a large asterisk next to their name, or (more dramatically) by putting the data that can be gained on them indirectly outside the main ranking in some kind of appendix. Of course, the ranking could also explain all this clearly so prospective students who want to learn more about the “unreporting” schools could reach out to them directly for their placement information.
    Would this be a viable method?

  11. Carolyn Dicey Jennings

    I think so, although I don’t want to put undue pressure on departments. If they are already providing the information somewhere, it might be possible to simply use that information, given the resources to review it all. But these are good suggestions, thank you!

  12. Derek Bowman

    Who cares about Leiter’s attitude? The thing we should be highlighting out of that post is that:
    1. Shitty, low-paid, dead-end, NTT jobs in the middle of nowhere are now highly competitive positions, attracting amazing candidates from all over.
    2. Philosophers are not ashamed to publicize a national search for a job offering such shitty terms to fellow members of our profession.
    To get this back on topic: This is just more evidence that comparative placement data is actually much less interesting than overall placement data. Except for those rare departments that improve placement by creating jobs for their graduates, all that emulating (comparatively) successful departments will do is increase competition, even for shitty, dead-end positions.
