I recently signed a pledge with the aim of being more respectful toward my colleagues and of trying to uphold a culture of respectfulness in our profession. Following conversation on a previous post, I have come to the belief that I should provide a safe space for people to discuss departmental rankings in philosophy. When, as a graduate student in 2011, I made critical comments at the Leiter Blog on the inclusion of women among the rankers of the PGR, I felt shut down. My comments were edited without permission in a way that made me appear less reasonable, while the original post and other comments were edited to make my interlocutors appear more reasonable. I think that it is healthy to evaluate ranking methodologies critically and openly, and I think that there must be a public space for this. Since I have already earned the ire of those who appear to be opposed to a public discussion, I am a good candidate for putting forward a post that will allow for discussion. I will thus allow anonymous postings and will aim to respect that anonymity both privately and publicly (except when required by law or conscience to do otherwise).

I will start with some of my own thoughts: I think that reputational information is helpful and important, but that it would be better to combine this information with data on placement, publications, and other such objective measures. (With this in mind, I sent my original findings on the job market to Brian Leiter and Kieran Healy in April 2012, but received no response.) An ideal ranking, in my mind, would be customizable. The viewer would have to choose metrics before a ranking would be created. I am open on what the relevant metrics might be. This is where you come in. Should we have rankings at all? What metrics do prospective graduate students care about (a variety of voices is of value here)? How should this work be completed, and by whom? Comments that appear to violate the norm of respectfulness will not be admitted as is, but anonymity is both welcomed and encouraged. Update: commentators should feel free to leave off their email addresses when posting comments.
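
To make the idea concrete, here is a minimal sketch, in Python, of how a viewer-customizable ranking might work. This is purely illustrative: the department names, metric values, and weights below are all hypothetical, and nothing here is the actual method of any existing ranking.

    # A ranking is only generated once the viewer has chosen metric weights.
    # All department names and metric values here are made up for illustration.
    departments = {
        "Dept A": {"placement_rate": 0.62, "reputation": 4.1, "funding_years": 5},
        "Dept B": {"placement_rate": 0.48, "reputation": 4.6, "funding_years": 4},
        "Dept C": {"placement_rate": 0.71, "reputation": 3.2, "funding_years": 6},
    }

    def rank(departments, weights):
        """Order departments by a weighted sum of min-max normalized metrics."""
        metrics = {m for d in departments.values() for m in d}
        lo = {m: min(d[m] for d in departments.values()) for m in metrics}
        hi = {m: max(d[m] for d in departments.values()) for m in metrics}

        def score(d):
            # Normalize each metric to [0, 1] so the weights are comparable.
            return sum(w * (d[m] - lo[m]) / ((hi[m] - lo[m]) or 1)
                       for m, w in weights.items())

        return sorted(departments, key=lambda name: score(departments[name]),
                      reverse=True)

    # A viewer who cares mostly about placement:
    print(rank(departments, {"placement_rate": 0.7, "reputation": 0.1,
                             "funding_years": 0.2}))

The point of the design is that the data stay fixed while the ordering belongs to the viewer: different weights yield a different ranking from the same underlying measures.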

Update: Creating (or updating) a ranking of this kind, with multiple objective values, is beyond my current capabilities. I fully and wholeheartedly welcome someone with more time and competence than me to take on this task. Better yet, I think, would be a task force involving those familiar with the PGR, since they already have lots of expertise. I am welcoming discussion here not because I plan to create a new ranking, but because I think it is important to have a discussion about all such rankings in the open. I am limiting my personal contribution to the placement data for now.

51 responses to “An Ideal Ranking”

  1. Rachel McKinnon

    I’m very glad for your work and look forward to our colleagues’ help in making things better.
    Perhaps I’m in the minority, but I wish our profession would be less obsessed with ranking things. But given that that’s the state of things right now, I agree that a customizable ranking would be useful. E.g., I can imagine that people would like to know which programs have had a good record placing candidates in the sort of jobs prospective students would like.
    I would like to note, though, that we should also be working hard to oppose the view that only 2-2 jobs at R1 research institutions are valuable and desirable. Moreover, prospective students should be open to changing their minds about what sort of employment they would like after completing their PhD. One might go into a program thinking that they only want to be in a research-focused position, but come to realize along the way that they’d prefer a more teaching-oriented position.

  2. Eugene

    I just don’t get how, especially in 2014, anyone could have an inflated view about the significance of faculty reputation for placement, since (a) there is no relationship between a department’s high reputation for research and a department’s success at training philosophers to be teachers, and since (b) the majority of jobs available these days (all days?) are teaching-oriented. What exactly is the use of a survey that assesses only faculty reputation?
    (I hope you don’t mind a bit of cross-commenting. I posted this same question over at Daily Nous.)

  3. Stacey Goguen

    Thanks for doing this!
    Because there are lots of different things that people in philosophy want, I would think that a database with more of the raw measurements, and less interpretation of those measurements (in the form of rankings or indirect labels like “placement record”, etc.), might be a good idea. I like Carolyn’s idea of a customizable database that would only generate rankings once you’ve specified some metrics.
    Also, to pick up on Rachel’s wish for less interest in rankings: the idea of “rankings” seems more anachronistic to me the more time I spend in philosophy. I’ve just been struck by the complete variety in what people think counts as a good program, a good teacher, a good scholar, a good journal article, and a good intellectual community.
    In particular, rankings for the whole profession seem a little weird to me when we tend to separate ourselves out into various subfields and circles of conversation. The standards within those subfields are much more important to us than any general standards we have as a profession. Sometimes even AOS is way too broad a division. For instance, formal epistemologists and social epistemologists seem to have very, very different standards for what good epistemology scholarship (and good philosophy in general) looks like. Articles that are held up in social epistemology as ground-breaking will probably not even make a blip on most formal epistemologists’ radars. I know it’s true in the other direction as well.
    Of course, more collaboration between subfields would probably be a good thing; but, in terms of measuring what we as a profession find valuable and impressive, I find general rankings for the whole profession to be fairly useless for many of us. For instance, I was reading those reports on the philosophers who are cited most in the “top” journals of all of philosophy, and I realized that I didn’t know most of those people.
    I certainly had no idea that David Lewis was such a big deal, because I’m pretty sure I’ve never read anything by him, or ever will need to in order to have a successful career. (That’s not to knock Lewis, but just to point out that the big dogs in one area can be absolutely inconsequential to another area of phil.)
    And I am torn on this, in that many of us want to resist the hyper-specialization of academia. But specialization is currently the state of our field, so if we want useful rankings (or just databases of information), comparing “programs” as a whole, or their reputations in general, seems largely unhelpful.

  4. HK Andersen

    Others, myself included, have said this elsewhere, but it bears mentioning again. For those of us who come from far outside the world of academia, with little or no academic guidance in applying for PhD programs, some kind of rankings system is vital. It is the only source of information we have on what counts as a good grad program, what one might be looking for in a program, etc. I had no idea what schools even had programs in my field of study when I first decided to pursue a PhD, and had no one to discuss this with. Having some ranking system doesn’t level the playing field entirely, but it does give some people a really helpful boost that they could not otherwise get. Available information, even if it is imperfect, furthers egalitarianism in terms of who gets to even apply to grad school.
    With that in mind, it might be helpful to find parameters that suggest to someone who has no other source of information what kinds of features might be useful. Some students may not even know what to consider in choosing a school, and the parameters are a useful way to guide them.
    I agree that specialty rankings are quite important. But there is also something to be said for general rankings (not that reputation is necessarily the way to go on this). Many students apply to grad school with a certain set of interests; however, lots of students then find that their interests in those subjects run out, and new interests arise. It is hard to commit, as a senior in college perhaps, to a specialty for a grad program, thereby limiting oneself to whatever happened to draw your attention during that year. There is something important to be said for programs with general breadth, such that whatever your AOS ends up being, you also learn a lot of other interesting and well-done philosophy.

  5. Grad student

    Thank you, Dr. Jennings, for this work. I think it will be immensely valuable to future prospective graduate students.
    You asked for metrics that prospective graduate students might find useful, and as a current grad student myself, I think I can offer some insight.
    1. It would be great to have some information regarding the degree to which faculty members at an institution are willing/able to work with graduate students in that department. Obviously this information would be extremely difficult to obtain, and even more challenging to interpret. Nonetheless, assuming one could garner reliable, comparable ratings of faculty-grad (constructive/helpful/edifying) interaction at various departments, I suspect this would be one of the more important metrics to incoming grad students.
    2. Information about funding and teaching requirements(/expectations) is typically unavailable to prospective grads prior to their admission into a department. For prospective grads who already have a teaching v. research preference, this seems unfortunate; for prospective grads who will depend on their funding for basic living expenses, this seems even worse. If the funding metric included information about departments’ willingness/ability to fund graduate travel, it would, I think, represent one of the most important metrics for prospective grads who want to emphasize research. Again, I recognize the difficulty with obtaining this information, but it would doubtless be useful.
    3. Any information, whatsoever, about matriculation and graduation rates would be of obvious utility.
    Thank you again for all the time and effort you’ve put into this. I think the information you’re already providing is important and very useful, and I find it deeply regrettable that you’ve received overly-harsh and misdirected criticism from some within our field.

  6. Ed Kazarian

    I’ll bite.
    I am pretty strongly opposed to rankings in most circumstances. I’m certainly profoundly opposed to them where journals, presses, and so on are concerned — and I’m generally horrified by the proliferation of institutions built around these imaginary merit-hierarchies. That certainly includes ‘reputational’ rankings of faculty quality, and, I’m pretty sure, any rankings of faculty quality in graduate or undergraduate programs. I simply can’t imagine a non-biased standard for such things, or a use of them that doesn’t function to protect and enhance the power of this or that in-crowd.
    The only exception to all of this is something like what Carolyn’s trying to do here, namely come up with a ranking based on solid, objective data concerning graduate programs’ ability to successfully professionalize philosophers. For me, nobody should go to graduate school without both the raw information re: what the outcomes have been for students in the programs they’re considering attending and a clear sense of where those programs stand in relation to others.
    I also think this needs to include a lot more than just placement. At bare minimum, I’d want to know dropout rate, time to completion, debt load, rank and salary of jobs obtained, etc. And the comparative piece for me needs to be there b/c students need to know exactly where things stand both across the board and for those who attend programs they’re considering.
    As others have said, the idea of a somewhat customizable database, along with rank-ordered lists based around the key elements here (but also breaking them down individually) seems like a great idea.
    I also think this is something that should eventually be an APA project–and that participation needs to be mandatory for all graduate institutions that want to be in good standing with the APA. Not to diminish the work of anyone who has taken it on themselves to try to improve the transparency of this stuff and shine a genuinely critical light on some of the more abusive corners of graduate education, but complete data, hosted and maintained by an authoritative body with as few conflicts of interest as possible, seems rather important at this point.

  7. BLS Nelson

    I very much support the attempt to develop a system of rankings, though I would prefer to specify my own criteria. E.g., I am kept up at night by the possibility that departmental rankings have a tenuous connection with what is happening in the literature. It is an interesting fact that the world of letters can sometimes be disconnected from word of mouth, at least to some degree, and we can expect that this is only going to get worse the more the field breaks into sub-specialties. I am enthusiastic to see the implementation of any measures that would help to revitalize the connection between literature and common knowledge.
    That said, there is some reason to think that your research inclinations are justified. Interesting as it was, Healy’s survey was limited to four journals (now infamously known as the “Healy four”). Because the data were circumscribed to just those sources, the results cannot tell us a general story about what is going on out there.

  8. Gordon Hull

    I’ll confine myself to two points:
    1. Carolyn’s data, whatever flaws it may have, is picking up on something that those of us who graduated from places like Vanderbilt have known for a long time – there are schools that the PGR doesn’t even rate that nonetheless have respectable placement records. Reasonable people can disagree about why this is, or about appropriate methodologies, but one virtue of more than one rating system is that factoids like that one get brought to light.
    2. I have a lot of reservations about rankings, or at least over-reliance on rankings (and I don’t want to take away from HK Andersen’s point, so this comment is a guarded one). This has all been hashed out repeatedly over the issue of law school rankings, but when deans and/or department chairs take these things too seriously, you end up producing departments that are more and more like each other, since the way to climb in the rankings is to hire people who are well-regarded according to the same general criteria that cause people at the top to be well-regarded. Additionally, whatever narrowing of the discipline this encourages, it also causes an arms race where rich departments compete for lateral hires. So being at the top is also correlated with departmental wealth. I’ve been in the world a while, and I know that being in a rich environment has lots of benefits, but there needs to be serious consideration of the point at which this sort of arms race damages departments in the financial middle.
    At the very least, these sorts of considerations urge that different kinds of rankings are a good thing.

  9. AnonGrad

    I’m not sure about others, but I’d be very interested to see placements broken down by AOS, dissertation supervisors, dissertation topics, etc. Maybe even along demographic lines like race and gender. Non-academic outcomes for graduates and those who leave the program would be interesting as well.

  10. anonn

    Apropos these comments [1, 2] yesterday concerning making personalized and different ranking schemes, I wrote a program that “ranks research output of philosophers based upon publication venues. For each person or list of people (a faculty), the journals where they have published are extracted from a Bibtex file. This file can be local or automatically downloaded from the PhilPapers archive, http://philpapers.org/. Then the publications are evaluated according to a ranking system. The ranking system is determined by the user, so if the user is interested in the ethical research output of a person or department, the user can increase the value of publications in ethics journals and decrease the value of other journals.” The system is not comprehensive or error tolerant, but does provide a quick, customized ranking that can be used as a starting point for further research. (A rough sketch of the core idea appears after the links below.)
    The links below are all the files I used to develop the program. Dr. Garrett’s bibtex file was downloaded from PhilPapers.org; the Harvard faculty list was taken from their department webpage. I am not associated with Dr. Garrett, Harvard, or PhilPapers.
    The program:
    http://slexy.org/view/s2bh8KkIQp
    The arbitrary rankings:
    http://slexy.org/view/s2iFH9YUZR
    Harvard Philosophy Faculty List:
    http://slexy.org/view/s21IDFtd83
    Dr. Aaron Garrett’s PhilPapers generated Bibtex file:
    http://slexy.org/view/s2EYequUVg
    The output using the Harvard list and arbitrary rankings:
    http://slexy.org/view/s2v6nFNRck
    The output using Dr. Garrett’s local bibtex file and arbitrary rankings:
    http://slexy.org/view/s21FUJ9rfj
    Python: https://www.python.org/
    BibtexParser python library: https://bibtexparser.readthedocs.org/en/latest/
    [1] http://www.newappsblog.com/2014/07/job-placement-2011-2014-comparing-placement-rank-to-pgr-rank.html#comment-6a00d8341ef41d53ef01a511d90b86970c
    [2] http://www.newappsblog.com/2014/07/job-placement-2011-2014-comparing-placement-rank-to-pgr-rank.html#comment-6a00d8341ef41d53ef01a73de50d6a970d
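
    For readers who would rather not open the linked files, here is a much-simplified sketch of the core idea (this is not the program above; the filename and journal weights are hypothetical, and the BibtexParser library linked above is assumed):

        # Simplified sketch: score a BibTeX file of publications using
        # user-chosen journal weights. Filename and weights are hypothetical.
        import bibtexparser  # https://bibtexparser.readthedocs.org/en/latest/

        # User-defined venue weights: raise the value of journals you care about.
        weights = {
            "Ethics": 3.0,
            "Mind": 2.0,
            "Philosophy of Science": 2.0,
        }

        with open("faculty_member.bib") as f:  # e.g., a PhilPapers-generated file
            entries = bibtexparser.load(f).entries

        score = 0.0
        for entry in entries:
            journal = entry.get("journal", "")
            score += weights.get(journal, 1.0)  # unlisted venues get a default weight

        print("Weighted research-output score: %.1f" % score)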

  11. Anonymous UK-based Graduate Student

    I’ll repeat some of what’s been said above and add a couple of new points. Though it looks like I’m occasionally saying things inconsistent with some of what’s above, I don’t think I disagree with any of it; rather I’m going to add some considerations which also need to be borne in mind in an ideal ranking.
    Ok. So first, I’m doing a PhD in a UK philosophy department that does not do especially well on the PGR. Previously, I did a Master’s in a department that did brilliantly on the PGR. The differences and similarities between my experiences surprised me. (1) The philosophical acumen of the faculty in each department is brilliant, and equally so. (2) Their acumen does not matter very much at all. This is because (a) they are not always good teachers, as has been noted; but it is mainly because (b) most of what I learnt (90%? 95%) I learnt from my fellow students, from postdocs, etc., in social contexts. And it is also because – and this point is particularly relevant and I’ll come back to it in the next paragraph – (c) as a PhD student, you only deal at any length with one or two academics, and if they are good, it hardly matters what the general philosophical acumen of the department is. (3) The acumen, work ethic, general knowledge, wisdom, diversity of every sort, you name it, of the students – be they undergraduate, Master’s, PhD, or even those not quite students, ex-PhDs hanging around that you see a lot of, etc. – in the highly-ranked department was far higher than in my current department. (This is why I am writing anonymously.) It’s uneven of course, and even in the highly-ranked department there were only a handful of people who really changed my life by what I learnt from them; but it is still the case.
    Because of 2(b) and (3) – that is, because you learn most from your fellow students, and it seems that they really are more philosophically astute in good departments – it seems that the PGR does indeed track something worth tracking. But because (2(c)) as a PhD student you don’t need this community so much – especially if you’re working on something very specialised – what the PGR tracks is of very limited use when applying for a UK PhD. What it is useful for is deciding where to apply for a Master’s. And this is fine, but not what it’s advertised as doing. From a U.S. point of view, where Master’s are part of PhDs, the situation of course looks different: going into a Master’s, not knowing what you’ll end up researching, it’s good to be in a big, broad department where you can research more or less what you want and still have a supervisor. But the utility of the PGR – and this is something that has to be borne in mind for all rankings, of course – is very different either side of the Atlantic, and this needs to be made explicit.
    Now we have to ask (sorry, I’ll stop soon!): WHY is it that the highly-ranked department attracts so many good students? In my experience, it was either (a) the general reputation/prestige of the university, or – what was much much more common – (b) because of the PGR. So the PGR is useful only because it’s there! If it weren’t there, some of these students would go anyway because of the university’s name, but most, I think, would decide in a much more informal way and end up in all sorts of places. It’s not obvious, mind, that this would be good for philosophy – the concentration of bright, keen students who are nervous and humble because they’re in such a highly-ranked place had a bootstrapping effect in my experience – but it’s something which is worth noting and worrying about.
    This is related: The people who take rankings like the PGR most seriously are prospective graduate students, often just teenagers. I’m not against a rankings system, but unless we want to encourage all the best students to go to one place just for the sake of concentrating them somewhere, its limits – of scope, of objectivity, etc. – must be made extremely clear and prominent, because teenagers – especially those prodigies who find themselves in top departments – are often pretty stupidly obsessed with that sort of prestige.

  12. Carolyn Dicey Jennings

    This is good work. Thank you for sharing! It could definitely be useful in the future.

  13. Aaron Garrett

    I am both first and last in my rankings of me! That seems about right.
    Aaron

  14. Carolyn Dicey Jennings

    Some of this information is available in the placement data (http://www.newappsblog.com/2014/07/placement-data-2011-2014.html), but not by department. Look at the “Overall Trends” tab and let me know if that is what you were thinking.

  15. Derek Bowman

    I guess I think having ‘rankings’ in the plural is better than attempting any sort of comprehensive master ranking. Even better, perhaps we can be careful to think of such rankings as rankings of properties of departments, rather than of departments themselves.
    So, e.g. instead of having a ranking of ‘departments according to placement rate’ we could have a ranking of ‘placement rates of departments.’ And similarly for rankings of ‘reputations of departments.’
    For reasons that I imagine are shared by those skeptical of rankings, I just don’t think there’s any interesting quantity of ‘overall department quality’ that all of these particular rankings could be balanced into. So I don’t see the recent placement rankings as competing with the PGR, since they’re not trying to measure the same thing.

  16. Anonymous UK-based graduate student

    Two clarifications: In saying 90% of what I learnt was from my colleagues, I was thinking of what I learnt from them rather than from faculty. I forgot to include my own work: writing essays, reading articles, etc. It goes without saying that I learnt a lot from this. But it’s also largely irrelevant, in that I could have read those anywhere. If I read more in that Master’s than I would have in another programme, that’s because of peer pressure much more than anything faculty-led.
    Second, I should stress the complexity and looseness of my comparing the graduate students in my PhD and Master’s departments. In many ways, my current colleagues are better-adjusted, have broader interests and more interesting histories; it is in most of the important ways silly, offensive and unhealthy to compare them. But with this caveat made…

  17. Derek Bowman

    Prof McKinnon,
    Apologies for the digression, but I think this is an important point.
    You say,
    “[W]e should also be working hard to oppose the view that only 2-2 jobs at R1 research institutions are valuable and desirable. Moreover, prospective students should be open to changing their minds about what sort of employment they would like after completing their PhD. One might go into a program thinking that they only want to be in a research-focused position, but come to realize along the way that they’d prefer a more teaching-oriented position.”
    I don’t disagree, but I would just note that it’s important that, when we do this, we don’t give graduate students the mistaken impression that there is an abundance of teaching-oriented positions waiting to be filled, if only they can get over their R1 envy. Full-time, living-wage teaching positions are highly competitive, and many (most?) of them have research requirements as well. I know you know this, but I also know all too well how tempting the inference can be from ‘teaching positions are undervalued’ to ‘I can get a (nonadjunct) job as long as I’m comfortable with the idea of a high teaching load.’

  18. Carolyn Dicey Jennings

    This perspective resonates with me. I didn’t know what I was doing at all when I first looked into graduate schools. This is one reason these questions matter to me. As I have said elsewhere, I started my research with the PGR specialty rankings on the advice of another student. Those rankings were very helpful to me then, since I did my undergraduate work in the U.K. and was fairly certain I wanted to concentrate on philosophy of physics in the United States, but I just knew nothing about who was doing philosophy of physics and where. I am very happy with how my education went, in part because of a number of very supportive pluralists at Boston University, such as my main advisor, Dan Dahlstrom. But when I look at the specialty rankings today, they do not seem particularly helpful, given my current interests. I would have a hard time knowing where to study if I had to choose again, even knowing as much as I do about the field.

  19. Carolyn Dicey Jennings

    All good points, thank you. Perhaps surveying current graduate students will turn out to be an essential component.

  20. Carolyn Dicey Jennings

    I think the APA is a great suggestion here, although they seem to have their plate full for now. I could also see the PhilPapers project turning to metrics like this, but I am not sure.

  21. Carolyn Dicey Jennings

    I also think of my education as having been very largely supported by my peers. But people do gather at places for different reasons. Many of the graduate students in my department chose it because of how serious it was about the history of philosophy. I did not choose it for that reason, but I am very glad now to have been in such a place. I think it helped me to become a more generous reader, for example, which is a skill that I had previously lacked.

  22. Anonymous UK-based graduate student

    Thanks for the response. You’re right, of course; these are also reasons. My experience is that these were much less common, but maybe my experience is unusual. And it could be different either side of the Atlantic (your PhD is from Boston University, is my Google-fu good?). A Master’s is a much smaller commitment than a PhD, so people go for weaker reasons. Why people choose one programme over another is something worth looking into, in a world where we’re creating an IDEAL ranking.

  23. Carolyn Dicey Jennings

    Good point. Another reason to survey graduate students.

  24. Ex-Prospie

    There are two main times when rankings matter to a PhD applicant: (1) when deciding where to apply, and (2) when deciding where to accept. I think rankings are more useful, and influential, with respect to (1). Students who get into a lot of places will usually be able to visit, and visits allow for all kinds of epistemic access that rankings don’t. (One often overlooked source of info: comparing experiences with other prospies.)
    In fact, I think that rankings tend to hurt rather than help students when it comes to (2). The PGR is rough, it fails to track quality of life and placement, and the ordinal ranking exaggerates differences between programs. Worst of all: the numbers “anchor” students’ opinions of a department, scaring them away from overperformers and seducing them towards underperformers.
    CDJ’s work has a lot of potential, and I think she deserves praise, but right now her ranking risks having the same flaw: anchoring students, influencing them even when it shouldn’t. If I’d looked at her ranking last April, I probably would’ve had a harder time with my decision to go to [University X]. Their record looks great when you see what departments are hiring their grads ([top institution], [top institution], [top institution], [top institution]…), but it’s not so hot when you look only at the TT-rates as CDJ does.
    The upshot is that we don’t always ignore rankings when we should. Even when our understanding of what’s being ranked outstrips what’s covered by the ranking, the numbers are seductive. There may not be a good way for rankers to prevent this, but it’s worth bearing in mind, and it’s worth making explicit to PhD applicants.
    Note: I (CDJ) edited this for the sake of maintaining anonymity by adding “[top institution]” in place of listed institutions.

  25. 5th year Grad Student

    -# of grad students each Big Name prof chairs, annually
    -grad student attrition by gender
    -placement
    -male v. female faculty ratio
    -ethnic diversity stats
    -male v. female grad student ratio
    -time to completion
    -dept does/does not collect grad student satisfaction data
    -dept does/does not provide Title IX training to faculty and grad students
    In reaction to the current hiring environment:
    -dept does/does not provide pedagogy training
    -dept does/does not have dedicated $ for travel
    -dept does/does not provide job search $ resources (individual Interfolio accounts)
    And only slightly tongue in cheek:
    -is/is not under Title IX investigation/engaged in active litigation

  26. Bharath Vallabha

    Thanks for opening a space to talk about this issue.
    One thing I have often wondered is: why hasn’t the upkeep of the PGR rankings been transferred over to the APA? Who better to capture neutrally the quality of philosophy departments than the organization which purportedly represents all philosophy departments?
    Perhaps the obvious reason this hasn’t happened is because the APA is far too pluralistic. Imagine all the philosophers who show up at an APA meeting. Are their conceptions and conditions of doing philosophy similar enough that they can all agree to some common metrics for ranking themselves? Highly doubtful.
    But this is the illusion which the PGR rankings foster. If I tell myself, “I got my PhD at a top department,” I can feel the temptation to interpret this as: “not just me or the people in my graduate department, but everyone in the know thinks this, and so it is something objective.” Instead of confronting with openness the true pluralism that exists among philosophers, the rankings give a false satisfaction that there is already a unity among the plurality. But it is a unity which hasn’t been earned and created from below through having the hard conversations; rather, it is imposed from above by those in a position to do the imposing.
    I don’t understand the hundreds of philosophers who fill out the evaluations for the PGR rankings, or those who are on its advisory board. Presumably they do it because they want to be of service to the profession. But how do they feel comfortable contributing to a ranking which aims to speak for the profession, even though it doesn’t happen through the APA? Or is the assumption that the PGR does the de facto quality control and evaluation work which the APA, committed as it is to political correctness, cannot do?
    I like the idea of customizable rankings, because it would highlight the distinction between pure facts about departments and the halo of the unified professional voice which the PGR fosters. It is worth asking: Who does the PGR speak for? How is it related to the APA? How did the PGR acquire the status it has in the profession and is it entitled to it? The more rankings there are, the more these questions can be discussed.

  27. Carolyn Dicey Jennings

    All good points. What do you think about the brackets, in place of a numbered rank?

  28. anon

    Here are three important, objective measures that could feasibly be gathered (without endless emails or surveys) on a single site in time for grad applications next year. These measures could then be customizable (weightable for significance) by grad applicants in accord with their priorities.
    1. CDJ’s placement data: it is very helpful now, and will only become more and more helpful as she continues to collect data (as Marcus Arvan has pointed out on his site).
    2. Percentage of women on faculty: Julie van Camp has already collected this data, so it is almost a no-brainer to use it. http://www.csulb.edu/~jvancamp/doctoral_2004.html
    3. Citation data: this is perhaps the most controversial measure, but it is meant as a more objective attempt to capture the PGR’s “quality/excellence/influence of faculty” idea. I am not saying that citation data perfectly captures quality or excellence of faculty; what I am saying is that it is a better measure than we currently have in the PGR report, where reviewers look at lists of faculty and then largely guess how influential many of the philosophers are. (See the recent Daily Nous discussion on this.)
    Prof. Jonathan Kvanvig (Baylor) collected citation data a few years ago using a variety of measures, so it is not impossible to acquire. http://certaindoubts.com/complete-hirsch-number-rankings-of-us-philosophy-phd-programs/
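
    For anyone unfamiliar with the Hirsch number mentioned above: a scholar’s h-index is the largest h such that h of their papers have at least h citations each. A minimal illustration in Python (the citation counts below are made up):

        # h-index: the largest h such that h papers have at least h citations each.
        def h_index(citations):
            cited = sorted(citations, reverse=True)
            h = 0
            while h < len(cited) and cited[h] >= h + 1:
                h += 1
            return h

        # Hypothetical citation counts for one philosopher's papers:
        print(h_index([42, 18, 7, 5, 5, 2, 0]))  # prints 5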

  29. r

    I would add an nth voice of support for rankings, for a similar reason. As an undergraduate I knew I wanted to apply to graduate school, but knew almost nothing about the contemporary scene. The group of philosophers I read in college was 1) not very broad and 2) mostly dead. So it’s not like I had much idea where the current research action on various topics was. The PGR was hugely useful to me.
    Similarly, I worry that attempting to do away with any ‘central’ ranking, in favor of a bunch of sophisticated search tools addressing micro-features, will have the result of making the ranking less accessible to the people who stand most in need of it, aka people like my super-ignorant younger self who weren’t even sure what the right questions to be asking were.
    Nonetheless, I am strongly in favor of at least some changes: namely, making both placement and time-to-completion data into ranked categories, because they are of obvious and overwhelming interest to prospective students. They are also, at least in my experience, very hard stats for prospective students to collect on their own, since as late-stage students drift out of regular department life it will often be the case that only a few people know exactly how many of them there are or what has been happening with them.

  30. Bharath Vallabha

    Several people (such as HK Andersen at 4 and r at 29) have said how useful ‘central’ rankings can be. I totally agree, and for the reasons they give, such as that it gives people thinking of grad school and jobs a sense for the landscape and what is happening in the various departments.
    It is worth highlighting that this is not a justification for the PGR as it is now, disconnected as it is from the APA. Surely the same people who now fill out the surveys for the quality of departments for PGR can do the same thing for some rankings which are maintained by the APA. In fact, the latter is more fair, because then if someone has a complaint about how the rankings are structured, they have someplace to go to with their complaints or suggestions for improvement; that is, they can go to the APA or organize to get their voices heard. One of the really strange things about the PGR is that because it grew out of one person’s sense for the quality of departments, there is no institutional sense in which one can object to it, nothing one can do if one is frustrated by it; it is just that person’s view of things, and one can take it or leave it. Even though there are now many philosophers associated with the PGR, the structure is still the same; it is those people’s view of things, and one can take it or leave it.
    The fact that the PGR is useful shouldn’t cover over the fact that the PGR as it is now enforces a kind of taxation without representation. Everyone in the profession has to deal with it, though not everyone gets a say in it. This is why it should go to the APA or some other such place which could at least enable some structure for everyone getting a say in it. The rankings would then still be out there so that people can use them and be helped by them.

  31. Rachel McKinnon

    Well certainly, and I’m currently happily employed in a sort of “in between” position (3-2 teaching load with some research expectations) at a large liberal arts college (with no graduate program).
    I never meant to suggest that, e.g., 4-4 jobs are easy to come by. Not in the slightest: any TT employment is very hard to come by. This isn’t so much about what job-seekers should be thinking about teaching-intensive positions. My comment is more about what the rest of us (particularly those with prominent blogs focused on the profession) should think of such positions. All too many view those in 4-4, 5-5, etc. teaching-intensive positions as second-class professional citizens, even to the point where some people think they couldn’t make it: they’re not good researchers, so they had to “settle” for such jobs. And it’s that attitude that I reject. I think it’s damaging to the profession and, more importantly, to members of the profession.

  32. Susan

    I agree that rankings can in theory provide a useful, time-saving overview, although they should not replace more detailed analysis of a program’s offerings. Additional data and a better system of rankings have been sorely needed for many years now. I am delighted by the prospect of a ranking that provides the most accurate information on all programs concerning a wide variety of criteria, and allows users to sort the list according to any factor. I would like to see information about placement that could be broken down according to kind of position (TT/Non, Grad/Undergrad, Full/Part-time) and sorted for a particular time period, as well as information about non-philosophy placements. I would like to know the numbers of full-time faculty, visitors, courtesy or double appointments, along with their stated areas of specialization. It would be helpful to include the total number of grad students, number of applicants, number accepted, fellowships and stipends offered along with teaching requirements, number of years of funding available, number of student publications, number and type of faculty publications, and so on. I would be happy to donate to the development of such a database; I believe the APA or some other professional philosophy organization should fund creation of such a database, and I have been puzzled for years now why it has chosen not to solicit and publish such information from programs that agree to provide it.
    I do not believe that rankings based on subjective determinations of reputational quality have much positive value; however, they can easily do unintended damage when people rely on them in judgments about which grad school to attend or which school’s job candidates are more impressive. More than one such system of rankings presently exists in philosophy, so I am not referring to any one of these in particular; many examples of a similar approach may be found in other disciplines and professions. I find all of these equally lacking for one simple reason: these rankings may readily perpetuate the confirmation biases of the raters. The best way to combat this is to gather a wide range of objectively verifiable information about the programs, so that individuals may sort and weigh it according to their own values. Some people wish to be part of a very large department; some people are not eager to land a job at a Ph.D. granting program; some people wish to be at a program where at least three full-time professors list Philosophy of Language as a specialty.
    But how would the prospective student know which program in philosophy of language had the best reputation based on the rankings, then? In short, they would not, and I can see no good reason why they should. The more important information is how many professors teach in that area, what those professors have published, how many grad students they have, how many of those grad students obtain jobs of different kinds, and what else is true of the program that might affect the quality of the student’s education and prospects for future employment. If all advanced-degree granting departments could be sorted according to several such objective measures, a prospective student would have ample information to narrow down a list of schools for detailed investigation and application. I do not think it would be more helpful for a student to learn whether a limited number of professors from a limited number of Ph.D. granting institutions are favorably impressed by the faculty at other such institutions, because that is only one subjective piece of potentially relevant information about a grad program, tailored to a narrow and pre-selected purpose. Other rankings systems could provide that sort of information – let a thousand flowers bloom! For my part, I’d prefer to have a wider range of information without pre-judgment concerning the importance of categories.

  33. Ex-Prospie

    Brackets are less fun than ranks, but I think brackets would be a great idea! I can’t think of a downside to them. (Maybe they’re less fun.)
    Also, since I didn’t say it in my original post, thank you so much for what you’re doing. You’re a credit to philosophers everywhere. 🙂

  34. philosopher

    The APA should not participate in the ranking of graduate programs. The APA is supposed to represent the discipline as a whole. A ranking of programs issuing from it would be detrimental to its credibility as an advocate for the discipline as a whole. Many people on the web seem to assume that the answer to all our problems in philosophy is to get the APA to do it, whatever IT is. It is far more sensible to have the APA focus on the sorts of things it can do well, and that will not undermine its credibility as a representative of the discipline.
    I am ambivalent about rankings in general. Of course they are happening all the time, formally and informally. I am asked by students to recommend graduate programs, and I tell them X is better than Y. I am asked by junior colleagues to recommend journals in which to publish, and I tell them A is better than B. The discipline would profit from having more than one type of ranking out there. We should not aim for a definitive ranking, of programs, journals, philosophers, etc. I. Berlin told us the goods are incommensurable. Different sorts of rankings, based on different sorts of considerations, will serve different people who will inevitably have different interests from each other.

  35. anonn

    I was thinking about different possibilities for the PhilPapers data. First, if some dedicated individual(s) were to curate the CV downloads for all the different philosophy departments — with the blessing of the PhilPapers editors — then we would have a reasonable, normalized starting data set.
    From that database we need not merely rank the output of philosophy departments by venue. We could track departmental publishing trends by correlating years and journals, pluralism within departments by showing the variety of publishing venues, pockets of specialization and expertise by looking at most publications in the smallest variety of journals, and perhaps even research efficiency by calculating the number of published articles to philosophy department size ratio (teaching load notwithstanding).
    Correlating different ranking systems, we could develop a hype-to-reality metric that tracks the departments with low PGR but with a good publishing record, and, conversely, a high PGR but poor publishing record. Likewise we could have a research-to-jobs metric: those departments with the largest gap between research output and landing their students philosophy jobs.
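
    As a sketch of how such a hype-to-reality metric might be computed (all ranks and scores below are invented for illustration; real inputs would come from the PGR and the publication data described above):

        # Compare each department's reputation rank (PGR) with its rank by
        # publishing record; a positive gap means the department publishes
        # better than its reputation suggests. All numbers are hypothetical.
        pgr_rank = {"Dept A": 1, "Dept B": 2, "Dept C": 3, "Dept D": 4}
        pub_score = {"Dept A": 14.2, "Dept B": 22.5, "Dept C": 9.1, "Dept D": 18.0}

        # Rank departments by publication score (1 = strongest record).
        by_pubs = sorted(pub_score, key=pub_score.get, reverse=True)
        pub_rank = {dept: i + 1 for i, dept in enumerate(by_pubs)}

        for dept in pgr_rank:
            gap = pgr_rank[dept] - pub_rank[dept]
            print("%s: PGR %d, publications %d, gap %+d"
                  % (dept, pgr_rank[dept], pub_rank[dept], gap))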

  36. Susan

    Philosopher #34, I agree with what I take to be your main point about the APA. I want to clarify that I am not calling for the APA to “rank” programs, where ranking involves any determination of the value of different programs. Rather, I would like them to collect detailed information about the programs and publish it in a custom-sortable database. The user has to bring his or her own judgments about the relative value of those factors to bear on interpretation of the information. I don’t think this would undermine its credibility in any way. Or perhaps the collection and publication of such data could be a future project for PhilPapers or some other frequently-used philosophy website.
    As a rough parallel, please consider the guide to ABA-accredited law schools: https://officialguide.lsac.org/release/OfficialGuide_Default.aspx . This is an extremely useful tool for prospective law students, though alone it is no substitute for eventually visiting or making further inquiries about schools on an application short list. Many other entities publish law school rankings. However, the one instrument that provides the greatest value for students is the ABA guide, and I know of nothing comparable to it in philosophy. I would think departments have a strong incentive to share this data. I take what Carolyn has done to be a portion of a larger potential project that could develop in this direction, starting from the assumption that actual placement data is a more valuable guide than other measures of program value to students. I do not believe that placement by itself is more important than other factors, which is why I would like to see it included in a larger compilation of information about department size, faculty publications and areas of specialty, and so on. Many prospective philosophy students don’t know what they might like to specialize in (which I consider a good thing – to be open to many possibilities, including sub-fields one doesn’t know much about yet), but it is very important to know how many professors at a given school are engaged in teaching and research in the student’s areas of interest.
    If I had a student interested in post-graduate study in a particular area of philosophy, I would most want that student to have a list of which programs have the most faculty in that area, and of those programs, which ones have the best placement record. The next questions would be how competitive the program’s admissions are, what financial support package they offer to grad students, and what the student-faculty ratio and size of the program are. I can see an important role for information about faculty publications and perhaps grad student publications. Can anyone mount an argument to the effect that this would not be the most valuable possible combination of data available? If I’m missing something I would like to hear the counter-argument. However, I believe what I have described here is the best way of presenting information about programs to students, and I note that it requires no faculty judgments about their peers at other schools or subjective rankings of reputation or quality whatsoever. If anyone can demonstrate to me that such information should be more important to prospective students than what I have described, please do. A counter-argument would have to account for the shortcomings of subjective faculty rankings. Let’s put it this way: if this were a Critical Thinking exam question, I know which sampling method and results would be subject to greater criticism.

  37. Lisa Shapiro

    A few disjoint thoughts:
    1. Economics has a quite well-established disciplinary ranking system that is based (unsurprisingly) on objective measures involving publications and citations, among others. This system involves explicit disciplinary norms about relative journal prestige. I believe this is the RePEc ranking system. The AEA has a webpage with a range of different ranking systems: http://www.aeaweb.org/gradstudents/Rankings.php I don’t think this is the way to go for Philosophy, for a variety of reasons, but one thing about economists is that they think really carefully about their measures and articulate them. It can be helpful to look at how other disciplines do things.
    2. While the Philosophical Gourmet rankings are meant to be aids to students applying to grad school, they get used in all kinds of other ways. In particular, they are used by departments to make cases about departmental prestige to Deans, et al. I don’t want to comment as to whether this is good or bad, but it does mean that one needs to be aware of how rankings can be used in devising rankings.
    3. One unfortunate consequence of philosophy rankings being largely based on research reputations is that students (and faculty) can lose sight of the importance of teaching to the flourishing of philosophy departments in a university context. Philosophical research may be valuable, but it rarely brings in grant money, partners with business (applied science) or influences public policy in concrete ways (economics). Philosophy is valuable in the university because of its traditional place as a humanistic discipline and the particular kind of critical perspective it can allow students in other disciplines to assume (think of the way science students benefit from philosophy of science courses). I don’t know how any ranking system can counter this trend.
    4. In reading over these thoughts it can seem as if I am opposed to philosophy rankings. This is not the case. I think that rankings can be very helpful, for instance, as others have pointed out, in providing advice to students interested in philosophy at places where they aren’t afforded any kind of sensible advice. I also think some kind of ranking system is helpful as arguments get made to administrators for scarce resources. But at the same time, we all could stand to be a bit more reflective about what any ranking system is really telling us.

  38. Martin Shuster

    Carolyn, above all, thank you very much for all of your work. It is valuable stuff and I am sure it is not easy. So, thanks.
    On the point of rankings: I think there might be a need for rankings based on placement (based on objective criteria: was this person placed, and if so, into what sort of position, etc.), but I cannot, after all these years, see the necessity of rankings based on other criteria. Especially in philosophy, one person’s trash is another’s treasure. Furthermore, I think it is obvious that, on the whole, even though perhaps initially (non-placement based) rankings served some good, they’ve now reached a stage where they’re little more than a festering, cankerous growth that continually sucks all things interesting and productive from the field, leaving behind a hellish wasteland of ressentiment, anxiety, and division (a state which only makes it too easy for the profession to be [further] exploited by and incorporated into the increasing monetization of education).

  39. sk

    I can’t help but feel that the pluralisation of rankings is having a kind of moneyball effect: new kids with statistics – and admirable transparency about the origin and organisation of those statistics – show the claims of the old-timey talent scouts to be based on not much but their own self-certainty.
    Perhaps the best that can be said about these measurements is that they measure different things; a proliferation of measurements, or rankings in relation to different measurements, could then only be a good, and thus CDJ’s efforts towards this end are much appreciated. If we are stuck with rankings, I would rather they measure something more tangible than “prestige,” which only serves to shore up what Ed so rightly terms these imaginary merit hierarchies – ostensibly objective-looking, but forged out of political forces, not least of which is philosophy’s ‘civil wars’ as Alcoff put it. From the discussion above, however, it sounds like what prospective graduate students need is information, not necessarily rankings.

  40. Bharath Vallabha

    Philosopher #34, I agree there shouldn’t be one official ranking endorsed by the APA. That would be pretty oppressive. I don’t think the APA should actively be doing any rankings. Rather, rankings should be registered through the APA. Kind of like getting a permit to have a set of rankings. This wouldn’t be that much work for the APA, but it would have the important benefit that at least in theory all rankings based on prestige would be seen for what they are: different groups of philosophers categorizing the philosophical landscape in different ways. Having a plurality of such rankings seems to me healthy and could be philosophically interesting.
    I think rankings are inevitable because something has to do some of the work which local knowledge used to do in the past. At the same time, rankings based only on objective measures such as placement, publications, etc. have two problems. First, as others have mentioned above, they will feed into the commercialization of academia. Second, having rankings based only on measurable outputs is psychologically unrealistic. At the end of all the metrics, one still wants to know whether the philosophers at a given department are doing good work, and for that some sense of how other philosophers evaluate them overall is helpful.
    So I think rankings based on prestige can be good. But only if there isn’t a de facto monopoly regarding such rankings. Now one might say: “But what is stopping any group from starting their own rankings based on prestige? Anyone can start that and there is no need for the registration through the APA idea.” Yes, any group of philosophers can start their own prestige rankings. But it might be hard for them to break through institutional momentum and get beyond the sense in the profession that it is just some fringe rankings. Even a simple act of registering through the APA can give such fledgling attempts some help in finding their footing.

  41. Carolyn Dicey Jennings

    Good ideas. Thank you for the links and suggestions!

  42. Carolyn Dicey Jennings

    Good points. Thanks for the links!

  43. Pavlos

    I dare to voice an opposing view to the general (and surprising) consensus here. I think one should think carefully about what kind of discipline philosophy is before jumping to the conclusion that “hard data” are a good measuring factor. To give an example – if one were to enlist as a student with the painter who sells the most paintings, one would probably end up enlisting with, say, a producer of kitsch beach scenes from Florida, rather than with, say, somebody highly regarded in art circles but with few sales. Similarly, the fact that somebody publishes a lot of articles in good journals might not be such a big indicator of quality as one might think. After all, should I trust the judgment of two random (and unknown) peers (sometimes even grad students) or the opinion of known and explicitly identified experts? The reputation survey is not as random as one might think – it measures the prevailing opinion of the expert philosophy community as to where things are happening (I doubt Fichte walked to Koenigsberg on the basis of Kant’s placement or publication record). It is true that perhaps the PGR is not done ideally or that it is used in a variety of ways, but there is no doubt that this is what it measures. It never claimed to measure the actual quality of graduate student instruction – for that, Leiter always advised, explicitly, that people need to consult placement records, grad students, and so on. I am not a big fan of the PGR – but at least it measures something of the sort on which I rely (as when I ask my colleague who is an expert in X, and I want to learn about X, whom I should read, rather than go do stats on the computer as to who has published most in X).
    Statistics is tricky: if the data are incomplete, it is to my mind pretty bad to publish any results based on them (imagine I ranked my students on the basis of randomly incomplete records of their work, published the ranking, and then said: look, it’s incomplete, so help me fill it out). Once you let a word out (even if patently false), it’s out there (“Obama is a Muslim…”).
    But even if the data are complete (which means, I think, departments will need to start employing a stats keeper – this enterprise then starts to resemble the NFL or NHL, a pretty hilarious picture of the philosophy community), what is the intended purpose? A fantasy league? A student can look at which department placed the most people (or the highest percentage) in the last 10 years, but what does that tell him about his own placement chances in 5 years? It is much more interesting to see what kinds of jobs people are getting from a place, and for that the department websites now provide a lot of info (thanks to the PGR). Nothing a student needs a ranking for.
    In any case, this is a bit rambling and a bit incoherent, but I thought it worth pointing out that the usefulness of the kind of rankings proposed here is not as uniformly accepted as the discussion might suggest.

    Like

  44. anonn Avatar
    anonn

    @Pavlos

    A fantasy league?
    I looked into modifying a fantasy draft program to analyze and display the rankings. 🙂
    I wanted to see what a first pass at department analysis using the PhilPapers data would look like. So I downloaded the CVs of the PhilPapers editors and broke them into three departments based on last name: dept1 is A-I, 10 people; dept2 is K-N, 10 people; and dept3 is M-Z, 11 people. (A minimal sketch of the parsing step appears after the links below.)
    The program:
    http://slexy.org/view/s20A5vvBhe
    The arbitrary rankings:
    http://slexy.org/view/s2iFH9YUZR
    The bibtex CVs from Philpapers:
    http://ge.tt/9pIDrRn1/v/0
    The output for dept1:
    http://slexy.org/view/s2rwrlgkiT
    The output for dept2:
    http://slexy.org/view/s2agmvxR3b
    The output for dept3:
    http://slexy.org/view/s20lC79Vpu
    Python: https://www.python.org/
    BibtexParser python library: https://bibtexparser.readthedocs.org/en/latest/
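    A minimal sketch of the parsing step (a toy stand-in for the full program linked above; the file names are hypothetical placeholders, not the real CV files):

        import bibtexparser  # the library linked above
        from collections import Counter

        # Hypothetical file names standing in for the PhilPapers editor CVs.
        DEPTS = {
            "dept1": ["editor_a.bib", "editor_b.bib"],
            "dept2": ["editor_k.bib", "editor_l.bib"],
            "dept3": ["editor_m.bib", "editor_z.bib"],
        }

        def load_entries(paths):
            # Parse each BibTeX CV and return a flat list of entry dicts.
            entries = []
            for path in paths:
                with open(path) as f:
                    entries.extend(bibtexparser.load(f).entries)
            return entries

        for dept, files in sorted(DEPTS.items()):
            entries = load_entries(files)
            # Tally publications per year as a crude first look at output.
            by_year = Counter(e.get("year", "unknown") for e in entries)
            print(dept, len(entries), "entries:", dict(by_year))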

    Like

  45. Susan Avatar
    Susan

    Pavlos, thank you for some thought-provoking insights about rankings. I don’t think the painting analogy holds, unless many philosophy departments have stellar placement records despite producing many graduates who do sub-par work. Furthermore, if a department that isn’t highly-regarded for quality is generating lots of job placements, then perhaps it’s time to reconsider whether those kitsch beaches might be far better than people had assumed.
    The same is true for publication records. I suppose some professors could have many lower-quality or less competitive publications, such that their work is less impressive than the output of someone who has published only a few very high-quality articles. Raw numbers of publications alone won’t tell the whole story. But if a department is consistently highly ranked in publications and is placing students in solid jobs, that information might be at least as valuable to prospective students as information about where faculty think a student could go to be trained by the loftiest experts.
    I’m not too worried about departments collecting the data because we’re all under increasing pressure from administrators to provide just this sort of data in the first place. In addition, a student may not know what his placement chances are relative to others at the same program, but how else can he make any prediction about placement chances? Knowing the general reputation of the faculty is not as helpful as knowing whether that reputation has translated into jobs for students.
    I agree when you say “nothing a student needs a ranking for,” in the sense that I don’t think “rankings” are needed at all. Rather, what is needed is information that students may sort and compare as they arrive at their own evaluative judgments. I am curious about what information you obtain from the PGR on which you rely – I don’t follow those rankings myself, so I may have missed something about what they provide. I thought they were rankings of department reputation based on a survey of a limited number of faculty in philosophy. If I want to know whom to read in a particular area, I don’t look to see who has published the greatest number of articles in that area, but I do look to see who is most frequently cited by researchers in the field, or whose arguments have most powerfully influenced subsequent research. In a sense, this is a way of getting a recommendation about who’s an expert, but it functions rather differently from the PGR insofar as it involves experts selecting citations directly relevant to their own research.

    Like

  46. Pavlos Avatar
    Pavlos

    Susan:
    1) The example from art is not an analogy for how to think about placement data in relation to quality. It’s an example of how one could (but doesn’t have to) think about how philosophy works and why quantitative data might be misleading. It explicitly relates only to quantity of publications vs. quality of publications (not to placement), and I did not mean for it to be taken on board as an analogy. There may be many views here, but it is not at all straightforward to me that quantitative data have much usefulness.
    2) There are very influential people who have published very little (Paul Benacerraf, for example, comes to mind; many others could be cited), and there are not very influential people who have published widely in highly ranked journals and are cited a lot – we often cite people because they published recently in a journal and their view needs to be accounted for, rather than because they actually influenced us. Philosophers influence each other in a variety of ways, many quite subtle, and these do not come through in whom people cite. Much depends on how one conceives of philosophy, I guess. For me, it’s a personal engagement with questions that concern the human condition – something that cannot be quantified very well. And the people who influence me or speak to me, philosophically, are often not people I cite in my scholarly work. In fact, many of them deeply influenced my thinking about what I do but are not acknowledged in anything I write (as when a philosopher deeply influenced by Wittgenstein writes on Aquinas).
    5) I am worried by the administrative push to treat philosophy as a kind of science – the forms and evaluations that many departments are now forced to follow or fill out are often BS for philosophy. The moves made here, the drive toward quantitative statistics, seem to me to embrace this business model of the academy. Personally, I don’t care much for it, and I find it worrying that this is the way the field is developing.
    6) I do not rely on the PGR in any way for selecting what to read. I don’t even know what that would mean. I said that it uses a measure of the same kind I use when I ask a colleague for a recommendation about what to read.

    Like

  47. anonn Avatar
    anonn

    re: 5
    It is already too late. Philosophy and the humanities are already losing funding and prestige because of how people measure things. Other disciplines have made their case, and if we don’t beef up our rhetoric, which includes statistical analysis, we will continue to lose out.
    I do not advocate a business model for philosophy, but if we want to counter such arguments, we need to be able to use them in our favor.

    Like

  48. p Avatar
    p

    You might be right. But see the recent essay by G. Risen. Perhaps we cannot do so by playing by the same rules as the sciences – if we do, we will lose.

    Like

  49. anonn Avatar
    anonn

    Using Dr. S. Kate Devitt’s journal rankings [1,2], I ranked the 3 departments based on their BibTeX CVs from PhilPapers.
    Department 1 came out clearly on top: it had the highest overall rank and the highest rank per paper (more papers in higher-ranked journals rather than lots in lower-ranked ones), and its percentage of ranked papers has increased over the last 20 years.
    Departments 2 and 3 were pretty even, with 3 having an edge over 2. Department 2 appears to have been publishing less in the ranked journals over the years, though its ratio of ranked papers to total papers published is higher. So even though Dept 2 publishes less in the ranked journals nowadays, it still publishes very respectably.
    Department 3 publishes the most of the three (and also has one more person than either of the others) and maintains the same average quality of ranked publication as Department 2. It has also published slightly less in the ranked journals over the years, but without as significant a drop-off as Department 2.
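    For the curious, here is a minimal sketch of how these metrics can be computed from the bibtexparser entries loaded above, assuming a hypothetical journal-to-score mapping (illustrative scores only, not Dr. Devitt’s actual list):

        from collections import Counter

        # Hypothetical rank scores (higher = better); not Devitt's actual list.
        JOURNAL_RANK = {"Mind": 4, "Nous": 4, "Synthese": 3, "Philosophical Studies": 3}

        def department_metrics(entries):
            # entries: a list of bibtexparser entry dicts, as loaded above.
            ranked = [e for e in entries if e.get("journal") in JOURNAL_RANK]
            total_rank = sum(JOURNAL_RANK[e["journal"]] for e in ranked)
            rank_per_paper = total_rank / float(len(ranked)) if ranked else 0.0
            ranked_share = len(ranked) / float(len(entries)) if entries else 0.0
            return total_rank, rank_per_paper, ranked_share

        def ranked_share_by_year(entries):
            # Fraction of each year's papers appearing in ranked journals,
            # to see whether the share rises or falls over time.
            totals, ranked = Counter(), Counter()
            for e in entries:
                year = e.get("year")
                if not year:
                    continue
                totals[year] += 1
                if e.get("journal") in JOURNAL_RANK:
                    ranked[year] += 1
            return {y: ranked[y] / float(totals[y]) for y in sorted(totals)}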

    Like
