Jason Mitchell, a Harvard social neuroscientist, gives an argument against scientific replication, and a defense of unreplicated science, here.   His argument, in a nutshell, is that all we ever learn from the failure to replicate an experiment is that the attempted replicator is a lousy experimenter.  Quoting Mitchell: 

  • Recent hand-wringing over failed replications in social psychology is largely pointless, because unsuccessful experiments have no meaningful scientific value.
  • Because experiments can be undermined by a vast number of practical mistakes, the likeliest explanation for any failed replication will always be that the replicator bungled something along the way.  Unless direct replications are conducted by flawless experimenters, nothing interesting can be learned from them.

Mitchell is giving, in nearly undiluted form, what Harry Collins and Trevor Pinch call the "golden hands" argument: "If you can't replicate what I did in the lab, that shows I have golden hands and you have iron claws."   Interestingly, C&P seem to believe that this is always a rationally defensible claim for any scientist to make.   Of course, unlike Mitchell, C&P understand that what is sauce for the goose is sauce for the gander.  If the above is, as both Mitchell and C&P maintain, always a defensible claim, then so is its converse:  "If I got the opposite result in the lab and you can't replicate it, then that shows I have golden hands and you have iron claws."

The problem is that if all of the above is true, then it would vindicate C&P's claim that all scientific reasoning is circular.   This is, after all, what they take the famous dispute between Pasteur and Pouchet over spontaneous generation to show.    Pasteur can always claim that Pouchet has iron claws, and Pouchet can always claim that Pasteur has iron claws, and this explains, from each one's point of view, why the other is getting results that don't accord with his own theoretical views.

C&P are absolutely right to think that it follows from this that all scientific reasoning is circular.  It becomes a matter of definition, on this conception, that you have iron claws just in case you get results that don't accord with my theoretical views.   But scientific reasoning isn't circular, and hence there has to be something wrong with this conception.    And what's wrong is that, pace C&P, there are often, in the long run (and hopefully not such a long run that we are all dead, though that may have been true in the spontaneous generation case), independent means of deciding whose experiment is the valid one.  And that means that, pace Mitchell, scientific replication, and the process of figuring out who has the iron claws and who has the golden hands, is a crucial part of the scientific enterprise.


6 responses to “A pretty awful argument against scientific replication (h/t Dan Weiskopf).”

  1. Jonathan Kaplan

    As I read C&P, the conclusion they actually defend (whatever they in fact believe) is not that science in fact is always circular, and that there is in fact never an “independent means of deciding whose experiment is the valid one,” but rather the (much) weaker claim that sometimes (perhaps often) these disputes are “settled” in ways that, in retrospect, look like they failed to provide really good independent evidence. In those cases, good independent evidence (often) comes later, after the dispute has already been settled by other (less strictly speaking rational) means… (Similarly, I take C&P to be suggesting not that the “experimenter’s regress” is unsolvable, but that solutions that are actually rationally defensible usually require additional information from new sources, etc.)
    This has, I think, no bearing whatsoever on the awfulness of Mitchell’s frankly strange arguments re: the “golden hands” problem, which I think you nail.
    (Also, how weird that Mitchell wrote this thing about replication and didn’t discuss uncertainty over the prior probability that the effect in question is real? No mention of Ioannidis-style arguments at all? The “goldenness” of our hands may not be the issue; the fact that most things we look for aren’t there might be…)


  2. Eric Winsberg

    Well, when I teach C&P, we discuss exactly this issue: is it “always circular,” “sometimes circular,” “often circular,” or what? They are slippery on this. I’ll quote you the money line from memory (I’m in Munich and don’t have my books), so I may get it a tiny bit wrong: “This controversy, like all scientific controversies, was settled not by facts and reasons but by death and the weight of numbers.” So, my reading is that they want to claim, in at least their punchiest passages, that all disputes get “settled” rather than settled.
    In any case, I think we agree that if Mitchell is right, then the most radical possible reading of C&P is right, and that can’t be a good thing.


  3. Jonathan Kaplan

    Agreed! If Mitchell is right, then the most radical possible reading of C&P is right, and that has got to be wrong. 🙂
    (I only have access to their second edition at the moment, where they did tone down a few things. There, in any event, they write that: “As in so many other scientific controversies, it was neither facts nor reason, but death and weight of numbers that defeated the minority view; facts and reasons, as always, were ambiguous.” Even that seems a little much to me, but at least not completely crazy!)


  4. Eric Winsberg

    Thanks! I knew there was some kind of universal quantification in the sentence “facts and reasons, as always, were ambiguous,” and there is also the “so many” at the beginning.
    I think the book is at least quasi-deliberately vague on what its message is. Sometimes it reads as a warning not to trust science-textbook histories. Sometimes it reads as a defense of radical relativism.


  5. Joe

    I don’t believe Mitchell’s argument is a “defense of unreplicated science.” Mitchell seems merely to be showing that the furor over the inability to replicate certain findings in social psychology might have more to do with the goal(s) of the researcher than with a genuine attempt to replicate the experiment of the original research.
    Some replicators set out to undermine the results of the original research rather than carefully reproducing the original experiment. There are exceptions, of course, though they are few and far between, given how heavily empirical research depends upon funding and how research questions are becoming more and more narrowly conceived to avoid heavily trodden research pathways. Supposed replicators either bungle the experimental design or manipulate the statistical results to find what they set out to prove.
    Should we dismiss replication experiments? No. Is that a defense of unreplicated science? No. Mitchell’s merely pointing out how naive it is to lean too heavily upon failed replication as a source of evidence against the original research. His argument seems to show that the attempt at replication is error prone (and, to my mind, he is saying that it’s subject to more potentially catastrophic errors than the original research), which is a very conservative conclusion.


  6. Eric Winsberg

    Well, I certainly agree that the views you are ascribing to him are more defensible than a defense of unreplicated science. But I am not sure they take account of many of the claims he makes in the piece, including in the bullet points. To wit:
    unsuccessful experiments have no meaningful scientific value
    Unless direct replications are conducted by flawless experimenters [who don’t exist – EW], nothing interesting can be learned from them.
    At any rate, none of this [attempts at replication] constitutes scientific output.
    etc.

