In another post I noted an apparent gender difference in the impact that the Philosophical Gourmet ranking of one's PhD-granting institution has on tenure-track placement, following up on posts elsewhere (here, here, and here). In this post I want to follow up on a speculation I made in comments: that the apparent difference is due not to a difference in how prestige affects women and men on the job market, but to a difference in how well the Philosophical Gourmet tracks prestige for areas with a higher proportion of men versus areas with a higher proportion of women.

You may already be familiar with work by Kieran Healy showing that the Philosophical Gourmet ranking especially favors particular specialties: "It's clear that not all specialty areas count equally for overall reputation… Amongst the top twenty departments in 2006, MIT and the ANU had the narrowest range, relatively speaking, but their strength was concentrated in areas that are very strongly associated with overall reputation—in particular, Metaphysics, Epistemology, Language, and Philosophy of Mind."

In comments on a post elsewhere, Benj Hellie listed the numbers of women and men responsible for evaluating departments for the Philosophical Gourmet in 2011, broken down by specialization; from those numbers I calculated at the time that women made up 12% of all reviewers (though an average of 15% per category). The Board of Advisors for the Philosophical Gourmet has 10 women, making up almost 18% of the board. But even this last percentage is lower than the percentage of women employed in philosophy, which is listed here at 21%. (It is closer to the percentage of women employed by Gourmet-ranked institutions, which is listed at 18.5%.) One possibility is that the low number of women involved on the board and in evaluation is partially responsible for the fact that the Philosophical Gourmet overall ranking especially favors M&E.

I was curious about the distribution of areas of specialization among board members and how it broke down by gender. To discover areas of specialization I went to home pages (or, when these were lacking, Wikipedia pages) and used either the first-listed area of specialization or the specialization following "best known for" language. Here are some pie charts that illustrate the proportional make-up of the board by AOS (an Excel spreadsheet with the information I used to create the pie charts is here):

[Pie chart: Advisory Board members by AOS]

[Pie chart: men on the Advisory Board by AOS]

[Pie chart: women on the Advisory Board by AOS]

 

As you can see, even in the small sample of women on the Philosophical Gourmet Advisory Board (10), women tend to specialize in areas that contribute less to the overall ranking.
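As an aside, here is a minimal sketch of how the two reviewer figures mentioned above (12% pooled, 15% average per category) can come apart; the categories and counts in the sketch are made-up stand-ins, not the actual numbers Hellie listed:

```python
# Hypothetical reviewer counts by specialty: (women, men) per category.
# These numbers are illustrative only, not the 2011 PGR figures.
reviewers = {
    "Metaphysics": (2, 28),
    "Ethics": (6, 24),
    "Feminist Philosophy": (8, 4),
}

total_women = sum(w for w, m in reviewers.values())
total_all = sum(w + m for w, m in reviewers.values())
pooled = total_women / total_all  # women as a share of all reviewer slots

per_category = [w / (w + m) for w, m in reviewers.values()]
unweighted_avg = sum(per_category) / len(per_category)  # mean of the category shares

print(f"pooled: {pooled:.0%}, average per category: {unweighted_avg:.0%}")
# Small categories with a high share of women pull the per-category average
# above the pooled share, which is how a 12% overall figure can coexist with
# a 15% per-category average.
```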

I next looked at tenure-track placements from 2011 to 2013, since I already have data on the first-listed AOS for these placements, as well as the gender of the candidates (for commentary, see the post here). Below are pie charts for the entire set of placements, as well as for placements broken down by gender:

[Pie chart: all tenure-track placements, 2011-2013, by AOS]

[Pie chart: tenure-track placements for men by AOS]

[Pie chart: tenure-track placements for women by AOS]

The same basic trend that we see in the board members shows up here: women are less concentrated in M&E fields than men are. What does this tell us? Women employed in philosophy (to say nothing of women in graduate programs) tend to specialize in a broader set of areas than men employed in philosophy, while the areas with a higher proportion of men are just those areas that make the biggest contribution to the Philosophical Gourmet overall ranking. No wonder, then, that the Gourmet ranking makes less of an apparent impact on women than on men: hiring committees may well be weighing prestige equally for men and women, but the Gourmet tracks prestige better for areas with a higher proportion of men.

How could the Philosophical Gourmet Report improve on this trend? An obvious move would be a concerted effort to involve more women in the ranking process, both on the advisory board and among the evaluators. Brian Leiter has noted elsewhere that a greater proportion of women are invited to these roles than accept them, which is noteworthy, but to me this just means that a more creative effort will have to be made to keep the board and the evaluators representative of the field. In concert with this effort, I think it would help the Philosophical Gourmet to involve more philosophers from areas that are currently poorly represented. It would be great to know the proportion of philosophers by area of specialization for the field as a whole; I would bet that it differs from the proportions on the advisory board. How different, I don't know, but at the least M&E fields appear much better represented than value theory fields, even though each has a claim to being among the most central areas of philosophy. Moreover, if the distribution of areas of specialization across tenure-track jobs is representative of the distribution for the field as a whole, then value theory fields are grossly underrepresented on the advisory board.

Do you have ideas for what the Philosophical Gourmet could do to improve? Leave them in the comments (which I will moderate). 


12 responses to “The Gourmet Ranking and Gender: How Can It Improve?”

  1. Matt

    One thing I worry about with some of this type of study is that I’m not really sure how we measure “breadth”: I don’t think that counting areas or sections of the PGR will do, as that would require (say) treating decision theory/rational choice theory or philosophy of law as equally important or central as epistemology or metaphysics, and that strikes me as implausible (even though I’m personally more interested in the former subjects than the latter). I don’t think that should lead us to just accept the status quo, or not try to improve, but I worry that there are some serious difficulties in comparing the categories, and that therefore just counting people will lead to distortion.
    My own preference for modifying the evaluation pool would be to include, to a greater degree than at present, people working in the history of the various fields. There is some worry, I suppose, that this would lead to double-counting of the history of philosophy, but I think it would make the rankings richer and the pool of evaluators deeper.


  2. Carolyn Dicey Jennings

    Matt,
    I agree that there are breadth concerns beyond what I mention here, but I am mostly interested here in the interaction of gender, particular areas of research, and the PGR. I am troubled by the fact that the PGR seems more representative of areas of research with a higher proportion of men, especially since it means that the PGR is not as predictive of hiring for women. Since both women and men use the PGR to select graduate schools, it may be letting women down to some extent (in the sense that women are more likely to choose areas of research that are not well represented by the survey, and thus their chances of getting a job are less dependent on the ranking of their institution in the PGR).


  3. Jenny Saul

    I think a targeted effort to broaden the areas represented by the experts would also help. And, in my view, dropping the whole-department rankings, which are especially prone to bias and unlikely to be based on detailed knowledge. I argue for the latter (among other things) here: http://onlinelibrary.wiley.com/doi/10.1111/j.1467-9833.2012.01564.x/abstract.


  4. BLS Nelson

    Thanks for your posts, Carolyn. Always interesting.
    I have one potential worry about method. What do the results look like if you factor in all of the areas of specialization of advisory board members, and not just the first listed? If folks feel gun-shy about promoting their work in ethics and value theory due to the perception that these are less prestigious areas, then they’re probably not going to list them first.


  5. Carolyn Dicey Jennings

    What would you say to the idea of having overall brackets, rather than overall rankings? For the 2011 Overall Ranking of English-Speaking Programs it could look like this (bracketing mean scores of 4-5, 3-3.9, and 2.7-2.9, and then alphabetizing within each bracket):
    Harvard University; Massachusetts Institute of Technology; New York University; Oxford University; Princeton University; Rutgers University, New Brunswick; University of Michigan, Ann Arbor; University of Pittsburgh; Yale University
    Australian National University; Brown University; Cambridge University; City University of New York Graduate Center; Columbia University (incl. Barnard); Cornell University; Duke University; Indiana University, Bloomington; King’s College, London; Ohio State University; Stanford University; University College London; University of Arizona; University of California, Berkeley; University of California, Irvine; University of California, Los Angeles; University of California, San Diego; University of Chicago; University of Colorado, Boulder; University of Massachusetts, Amherst; University of North Carolina, Chapel Hill; University of Notre Dame; University of Pennsylvania; University of Southern California; University of St. Andrews/University of Stirling Joint Program; University of Texas, Austin; University of Toronto; University of Wisconsin, Madison
    Birkbeck College, University of London; Georgetown University; Johns Hopkins University; Northwestern University; Syracuse University; University of California, Riverside; University of Edinburgh; University of Leeds; University of Maryland, College Park; University of Miami; University of Reading; University of Sheffield; University of Sydney; University of Virginia; Washington University, St. Louis
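    For concreteness, here is a minimal sketch of the bracketing-and-alphabetizing step I describe above; the mean scores in the sketch are made-up placeholders, not the actual 2011 figures:

```python
# Group programs into score brackets, then alphabetize within each bracket.
# The mean scores below are illustrative placeholders, not the 2011 PGR means.
scores = {
    "New York University": 4.8,
    "Rutgers University, New Brunswick": 4.4,
    "University of Arizona": 3.6,
    "Brown University": 3.2,
    "Syracuse University": 2.8,
    "University of Leeds": 2.7,
}

brackets = [(4.0, 5.0), (3.0, 3.9), (2.7, 2.9)]

for low, high in brackets:
    members = sorted(name for name, score in scores.items() if low <= score <= high)
    print(f"{low}-{high}: " + "; ".join(members))
```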


  6. Carolyn Dicey Jennings

    I am not sure. I didn’t notice much of this (I was on the lookout), but I could try to take it into account in a future iteration. How would you like to see it work? An average of the category numbers representing all listed areas of research? That could have its own problems, of course, but it might solve the problem you mention.


  7. Carolyn Dicey Jennings

    Another bracketing option would be to use Kieran Healy’s mean overall ranking by specialization (discussed here). If, for example, you form the top bracket from the means between 0 and 20 for the data based on the 2006 PGR (looking at the bolded lines here), it would be (alphabetized):
    Australian National University; Cambridge University; Columbia University (incl. Barnard); Harvard University; Massachusetts Institute of Technology; New York University; Oxford University; Princeton University; Rutgers University, New Brunswick; University of Arizona; University of California, Berkeley; University of California, Los Angeles; University of Michigan, Ann Arbor; University of North Carolina, Chapel Hill; University of Notre Dame; University of Pittsburgh; University of St. Andrews/University of Stirling Joint Program; University of Texas, Austin; University of Toronto; Yale University


  8. Susan

    There’s an easy solution to this: stop including overall rankings, period. Rank each area of specialty. List all the raters in that area. List ALL the schools mentioned by the raters. The ratings then mean something fairly useful: the schools most often mentioned by experts in a field, when thinking about the best places to be trained in their sub-field. If a student isn’t yet sure what specialty to study (which I think is a good thing), he or she can browse rankings in several areas to get a sense of which names keep coming up.


  9. Joe

    I think this is all useless. As far as I can tell, the original purpose of the report was to rank graduate programs qua graduate programs. But this turned out to be impossible, since the PGR (1) does not track placement records; (2) does not track the culture and effectiveness of advising; (3) does not track the accessibility of faculty; (4) does not track climate issues; and (5) does not track financial and other conditions. It also does not track the publication record or actual impact of the faculty, and so on. It relies purely on the reputation of some faculty in particular departments among a chosen group of evaluators. My guess is that people evaluating a department rarely, if ever, know all the people and their work in the department they evaluate (this often holds even for their own field), and tend to evaluate the department simply on the basis of the few names they recognize, the general consensus about the quality of a given university, their particular view of what matters or is good, or who is in their camp. I do think that, despite all of this, the PGR is useful. But I do not think that modifications of the sort discussed would do much.


  10. Carolyn Dicey Jennings

    It seems to me that the only way you could hold the PGR to be useful and the above to be useless is if you thought it useful to see what a gender-skewed and specialization-skewed populace thinks about reputation and prestige. But if you are interested in reputation and prestige as such, it seems to me that you would prefer to get this information from a sample that is representative of the field. To me, this means including more women. As I say above, women make up 12% of the full set of evaluators, while women make up 21% of employed philosophers. Moreover, women tend to specialize in different areas than men, AND specialization has an impact on perceived prestige (see Kieran Healy’s post here, discussed above: http://leiterreports.typepad.com/blog/2012/03/ratings-and-specialties.html). I agree that the overall ranking may have other problems, but trying to make the evaluators representative of the discipline seems like it could only improve it.


  11. BLS Nelson

    It seems to me that it is very hard to deny that this is at least a potential problem, once we accept the inferences about the relative prestige of LEMM fields from Healy’s surveys of the Top Four. For that reason I would think that averaging of specializations would be more of an asset than a liability. But I’m naive about sociological methods, and if push comes to shove I would defer to your expertise if you think the reverse is true. It is admittedly a lot of grunt work, so maybe you (or I, or whoever) could just look at a small sample of department specializations and tell us what you find, instead of going through the lot of them all over again.
    That said… in an ideal world, I would like to see Prof. Healy’s analysis widened in scope. An analysis based on the Big Four strikes me as potentially misleading for all kinds of reasons. For example, given my own research interests, when I think about prestigious journals, “Philosophy & Public Affairs” tops my list; if I were ever to publish in it, I know that I could be proud of the achievement. Any analysis that leaves P&PA out is not especially useful.


  12. Matt

    Joe said,
    “As far as I can tell, the original purpose of the report was to rank graduate programs qua graduate programs”
    I think this is almost completely backwards: the rankings have always been only a ranking of faculty quality, and it has been explicitly noted that they are not a ranking of any of the other things you mention, nor a ranking of “graduate programs qua graduate programs,” whatever that would mean. (I’ll admit I have no idea what it would mean, and I expect that’s a good reason not to try to make a ranking that measures it.) Consider this statement from the most recent Gourmet report, which is not, I think, new to it:
    The rankings are primarily measures of faculty quality and reputation. Faculty quality and reputation correlates quite well with job placement, but students are well-advised to make inquiries with individual departments for complete information on this score.
    There are other disclaimers like that, and Leiter regularly makes others, noting that the PGR can’t tell you anything about what it’s like to be a grad student in a department, whether faculty are accessible, etc. Now, do some people, despite the explicit warnings, misuse the rankings, or think they are supposed to be doing something else? Surely. But I don’t know what anyone is supposed to do about that, except feel bad for people who are so foolish or who can’t read or whatever. But the rankings have always been explicitly rankings of, and only of, faculty quality. You might object that they are not good at that. There are better or worse complaints along this line. But complaining that they do a bad job of things they are not meant to do is not really a good objection.

