I thought I would make my inaugural post on NewAPPS a follow-up to Roberta's post about the retraction of the article in Food and Chemical Toxicology.  I don't want to continue the debate about whether the retraction was justified; that debate can continue in the original thread.  Here, I want to discuss one of the reasons why we should be paying vigilant attention to events such as these, and why their importance transcends the narrow confines of the particular scientific hypotheses being considered in the articles in question.  What I worry most about is the extent to which commercial interests can apply pressure to shift the balance of “inductive risks” from producers to consumers by establishing conventional methodological standards in commercialized scientific research.

Inductive risk occurs whenever we have to accept or reject a hypothesis in the absence of certainty-conferring evidence.  Suppose, for example, we have some inconclusive evidence for a hypothesis, H.  Should we accept or reject H?  Whether or not we should depends on our balance of inductive risks—on the importance we attach, in the ethical sense, to being right or wrong about H.  In simple terms, if the risk of accepting H and being wrong outweighs the risk of rejecting H and being wrong, then we should reject H.  But these risks are a function not only of the degree of belief we have in H, but also of the negative utility we attach to each of those possibilities.  In the appraisal of hypotheses about the safety of drugs, foods, and other consumables, these are sometimes called “consumer risk” (the risk of saying the item is safe and being wrong) and “producer risk” (the risk of saying the item is not safe and being wrong).
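The decision rule above can be put in toy computational terms: compare the expected loss of accepting H with the expected loss of rejecting it. This is only an illustrative sketch, not anything from the literature discussed here; the function name and all the numbers are made up.

```python
# Toy expected-loss comparison for accepting vs. rejecting a hypothesis H.
# p is our degree of belief in H; the loss terms are the (invented) negative
# utilities attached to each way of being wrong.

def decide(p, loss_accept_when_false, loss_reject_when_true):
    """Accept H iff the expected loss of accepting is lower."""
    expected_loss_accept = (1 - p) * loss_accept_when_false  # consumer risk
    expected_loss_reject = p * loss_reject_when_true         # producer risk
    return "accept" if expected_loss_accept < expected_loss_reject else "reject"

# With H = "the product is safe": wrongly accepting harms consumers,
# wrongly rejecting harms the producer. Even a 0.9 degree of belief
# can warrant rejection if the consumer-side loss is large enough.
print(decide(0.9, loss_accept_when_false=100, loss_reject_when_true=5))  # reject
```

The point of the sketch is that the verdict changes with the utilities, not just with the evidence: hold p fixed and shrink the consumer-side loss, and the same degree of belief licenses acceptance.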

In recent work, the philosopher Heather Douglas brought this 1950s concept of inductive risk back to our attention, and added a new twist.  She argued that it is not only in the appraisal of hypotheses that we engage in balances of inductive risk, but also in the choice of methods.  Suppose I am investigating the hypothesis that substance X causes disease D in rats.  I give an experimental group of rats a large dose of X and then perform biopsies to determine what percentage of them has disease D.  But how do I perform the biopsy?  Suppose that there are two staining techniques I could use; one of them is more sensitive and the other is more specific.  That is, one produces more false positives and the other more false negatives.  Which one should I choose?  Douglas points out that the choice will depend on my inductive risk profile.  To the extent that I fear consumer risk, I will choose the more sensitive stain, the one with more false positives.  And vice versa.  But that, she points out, depends on my social and ethical values.
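Douglas's point about method choice can be cast in the same toy terms: given two hypothetical stains with different error profiles, which one minimizes expected loss depends entirely on the utilities we plug in. The error rates, base rate, and losses below are invented for illustration.

```python
# Toy comparison of two hypothetical staining techniques.

def expected_loss(false_pos_rate, false_neg_rate, base_rate,
                  loss_false_pos, loss_false_neg):
    """Expected loss per animal screened, given the disease base rate."""
    return ((1 - base_rate) * false_pos_rate * loss_false_pos
            + base_rate * false_neg_rate * loss_false_neg)

# Stain A: sensitive (few false negatives, more false positives).
# Stain B: specific (few false positives, more false negatives).
def pick_stain(loss_false_pos, loss_false_neg, base_rate=0.2):
    a = expected_loss(0.15, 0.02, base_rate, loss_false_pos, loss_false_neg)
    b = expected_loss(0.02, 0.15, base_rate, loss_false_pos, loss_false_neg)
    return "A (sensitive)" if a < b else "B (specific)"

# Fearing consumer risk (missed disease is costly) favors the sensitive stain;
# fearing producer risk (false alarms are costly) favors the specific one.
print(pick_stain(loss_false_pos=1, loss_false_neg=50))   # A (sensitive)
print(pick_stain(loss_false_pos=50, loss_false_neg=1))   # B (specific)
```

Nothing in the data settles which stain is "correct"; the verdict flips as soon as the loss assignments flip, which is exactly Douglas's point that the methodological choice encodes social and ethical values.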

In a subsequent paper, Torsten Wilholt argued that Douglas' insight gives rise to a puzzle: when is a choice of scientific methodology a case of bias?  If no methodological choice can be justified in a value-free vacuum, then what is the difference between selecting a method on the basis of a choice of values that, say, leans more to the side of avoiding producer risk, and choosing one that is outright biased in favor of industry?  Douglas' insights make this question more puzzling than it might have originally seemed.  Wilholt offered a useful suggestion: a methodological choice counts as biased if it flouts an established, even if entirely conventional, methodological standard—in the absence of some adequate justification for doing so.  Whether or not one accepts Wilholt's solution to this normative puzzle, as a descriptive claim about how distributions of inductive risk get settled it strikes me as exactly right.  Methodological standards are, qua conventions, encodings of the conventionally accepted, default balance of inductive risk between consumer and producer.

If you've skipped to the end for the punch line, it is this: we should be very careful about attempts by producers to set the agenda vis-à-vis conventional standards.  And without getting bogged down in the particulars of the case Roberta blogged about (we can leave that for the other thread), we should be highly alarmed when there is even the appearance of a conflict of interest in play for someone who is exerting influence on the rigidification of a methodological standard for scientific experiments that test the safety of products that will come to market.  I will leave it to others to decide if the article Roberta linked to makes the case that such an appearance exists here.  But I smell something fishy.  Two facts in particular contribute to the ichthyesque odor.  The first is the appearance of a conflict of interest that arose when, right before issuing the retraction, the journal appointed a special editor with ties to Monsanto and a GMO-industry-funded group.  The second is the focus, in the letter of retraction, on methodological features of the study, including the breed of animal used—a canonical sort of methodological choice that can move the bar of inductive risk in either direction.

12 responses to “Consumer Risk, Producer Risk, and the politics of commercialized science”

  1. Patrick S. O'Donnell

    Although I’m not familiar with the literature referenced here, the contours and concerns of this discussion strike me as uncannily similar to questions first raised in the context of the debate on nuclear power and waste disposal in a critical and original manner by Kristin Shrader-Frechette in her book Risk and Rationality: Philosophical Foundations for Populist Reforms (University of California Press, 1991).

  2. Patrick S. O'Donnell

    [Eric: Incidentally, while now at Notre Dame, I just learned that Shrader-Frechette was a member of USF’s Philosophy Dept. (along with Environmental Sciences) from 1987-98.]

  3. Eric Schliesser

    That’s a great post on an important topic, Eric!
    I had not read this paper by Torsten, so thank you for calling attention to his follow-up to Heather’s excellent piece. I worry that Wilholt’s suggestion — “if it flouts an established, even if entirely conventional, methodological standard—in the absence of some adequate justification for doing so” — is too permissive, for two reasons: (i) the epistemic, problem-solving, or consensus-generating advantages of the new methods may be superior to those of the established ones, while the inductive risks may well be unclear at first (because, say, unexamined, willfully neglected, etc.). By the time the inductive risk is more apparent, the new method may well be the established convention. (ii) What (i) reveals is both (a) that the baseline may be problematic (because introduced in permissive fashion) and (b) that “some adequate justification” is a low threshold, especially (I suspect) in scientific communities that value consensus (and have little sensitivity to down-stream uses of the methods).

  4. Curtis

    Lyotard’s comments on this were truly prescient (The Postmodern Condition: A Report on Knowledge).

  5. Eric Winsberg

    Patrick: The inductive risk literature goes back to the 1950s, when the first arguments were introduced by Rudner and Churchman (and the term was coined by Hempel).  Torsten studied and did his assistantship at Bielefeld, which has strong ties to the philosophy department at ND.
    Eric: I probably didn’t do Torsten’s criterion justice, exactly. But I agree it might be harder than he makes it out to be to give, exactly, the necessary and sufficient conditions for a method to be biased. Still, I think his statement of the puzzle is extremely interesting, and more should be said about it by the “science and values” folks (of which I am a fellow traveler, at least). And I think the descriptive claim that conventional standards, when they exist, can set where the default balance of inductive risk lies (which is all I rely on for this post) is dead right.
    Curtis: can you say more?

  6. Curtis

    Sure. It’s been a while, but luckily I took notes and they’re readily available. I hope I transcribed them correctly. I think there’s only one edition, though, so the page numbers should work for everyone.
    “The state and/or company must abandon the idealist and humanist narrative of legitimation in order to justify the new goal. In the discourse of today’s financial backers of research, the only credible goal is power. Scientists, technicians, and instruments are purchased not to find truth, but to augment power.” (46)
    “By reinforcing technology one reinforces reality, and one’s chances of being just and right increase accordingly. Reciprocally, technology is reinforced all the more effectively if one has access to scientific knowledge and decision-making authority.” (47)

  7. Patrick S. O'Donnell

    Eric,
    I got the point about the origins of the literature; I was simply noting what is being done with it. I suppose the issues raised in your last paragraph were raised in the 1950s as well?

  8. Eric Winsberg

    No. Not at all. I should look more carefully at the KSF book. (We never overlapped at USF, btw. I think she left two years before I got there.)

  9. Mark Lance

    Of course all the same, and probably more, applies to assessments of utility, right? I mean, not only are there going to be differing senses of what counts as utility across ideologies, social positions, etc. – in the nuclear power case, does the centralization of control of energy count as an aspect of negative utility? Certainly not in any industry assessment of risk – but on any honest theory of utility there will be recognized inductive risk here also. So the problems are going to compound in nasty ways, I suspect.

  10. Eric Winsberg

    Yes. “Producer risk” and “consumer risk” are just one possible dimension of inductive risk. The claim, put most generally, is just that every decision you can make in science will involve balancing out the four standard boxes of a decision matrix–whatever utilities attach to the four possible outcomes.

  11. Mark Lance

    I’m just emphasizing that filling in those boxes is also often a scientific question.

  12. MarkWD

    Working out how to weight a decision matrix is another scientific question in itself!
