Hugo Mercier sent me this response (below) to my blogpost The invisible hand of argumentative reasoning doesn't work so well – so what can we do about it? Thanks to Hugo for this response! 

Argumentation gets a bad press. It’s often portrayed as futile: people are so ridden with cognitive biases—less technically, they are pigheaded—that they barely ever change their minds, even in the face of strong arguments. In her last post, Helen points to some successes of argumentation in laboratory experiments with logical tasks, but she doubts whether these successes would extend to other domains such as politics or morality.

I think this view of argumentation is unduly pessimistic: argumentation works much better than people generally give it credit for. Moreover, even when argumentation fails to meet some standards, the problem might lie more with the standards than with argumentation. Here are some arguments in support of a view that is both more realistic in its aspirations and more optimistic in its depiction of argumentation—we’ll see if these arguments can change Helen’s mind about the power of arguments.

As Helen pointed out, the argumentative theory of reasoning that Dan Sperber and I developed explains why reasoning is biased—in particular, why it tends to look for reasons that support the reasoner’s point of view (myside bias or confirmation bias). According to this argumentative theory, when reasoning produces arguments, its function is to convince others, not to check the validity of the reasoner’s position, and so finding arguments that support the reasoner’s position makes sense.

By contrast, when reasoning evaluates arguments, its main function is to let the reasoner be convinced by good enough arguments. Argument evaluation should be as objective as possible, so that people can accept strong enough arguments even if they challenge the reasoner’s prior beliefs, or if they come from untrustworthy sources.

Helen cites some research purportedly showing that argument evaluation is in fact heavily biased. In these experiments, participants are given an argument—in the interesting condition, an argument that challenges their beliefs—and they are then asked to evaluate the argument. What typically happens is that people read the argument, but they do not find it strong enough to simply accept its conclusion. As a result, they start looking for arguments to support their rejection of the conclusion, as one would in a dialogic setting. Many of these arguments will point to flaws in the argument to be evaluated, making for a particularly critical evaluation.

If this interpretation is correct, then the only thing these experiments reveal is that the production of arguments is biased (as expected), not that the evaluation per se—the brief phase that takes place when people read the argument—is biased. As it happens, I’m working on a paper that defends this interpretation with new experiments showing that when a more appropriate methodology is used, argument evaluation does not seem to be biased.

Even if the reader were to trust me regarding the interpretation of these experiments, the objectivity of argument evaluation might seem hard to reconcile with the perceived failures of real-life argumentation. If argument evaluation is so objective, why is it supposedly so hard to change people’s minds on so many issues?

If it is to be adaptive, argument evaluation should lead people to accept beliefs and decisions that are, on the whole, adaptive. This means that argumentation does its job when it rejects an argument that would lead to potentially less adaptive beliefs or decisions. This is true even if other people think the argument is strong, and even if the reasoner cannot adequately defend her rejection of the argument. I would guess that this explains many so-called failures of argumentation in politics, religion, and morality.

Take politics, for instance. Weeden and Kurzban have recently argued that, contrary to received wisdom in political science, people’s opinions often track their interests fairly well—as one gets richer, for instance, one tends to want lower taxes. If they are right, then political argumentation should not be expected to make a massive difference. If it did, it would lead people to vote against their interests.

Weeden and Kurzban’s analysis also applies to some moral opinions. They suggest, for instance, that opinions about promiscuous sex are partly self-interested and depend on one’s life choices. Again, to the extent that Weeden and Kurzban are correct, one should not expect argumentation to have much effect on these opinions.

To many this standard for sound argument evaluation might seem depressing, as it means that it’s going to be hard to convince everyone of whatever political and moral beliefs we tend to favor. I, on the contrary, find this quite reassuring. I’d rather live in a world in which it’s too hard rather than too easy to convince people to act against their interests.

This being said, argumentation is still quite powerful. Even if the changes it leads to are sometimes small (at least, smaller than some would like them to be), they tend to be in the right direction. This is true not only when people tackle logical tasks in the lab, but also in many real-world settings.

I’m not saying that argumentation is all-powerful—I could write another post on some of the reasons it sometimes fails to change people’s minds even when it should. But it is vastly more effective than many people give it credit for.


7 responses to “In defense of argumentation”

  1. Anon

    Admitting the self implicating nature of these arguments, I must admit I have deep bias against the sentiment that “I’d rather live in a world in which it’s too hard rather than too easy to convince people to act against their interests.”
    That’s because I think what’s most important is the effectiveness of moral and political reasoning, which often seeks to persuade us to act on others’ interests in addition to our own and, indeed, sometimes against our own interests.
    It’s also because I think what’s most important in moral reasoning is reasoning about foundational beliefs, and so effectiveness requires persuading people to change their beliefs about what their true self-interests are. (It seems far from obvious to me, for example, that it’s in the true long-term self-interest of the wealthy to continually lower taxes.)
    This in turn makes me wonder about what’s really going on when moral reasoning is effective. Group reasoning may expose errors in our conception of self-interest, and our degree of shared interest. But then it’s not reasoning that overcomes bias, but bias on a social rather than individual scale–leaving foundational assumptions about shared interests unaffected.


  2. Helen De Cruz

    Hi Hugo: Thanks for these comments! So if I understand you correctly, you argue that argument evaluation is less biased than it seems from some of the data I’ve cited (and that you also have empirical evidence to back this up). “Argument evaluation should be as objective as possible so that people can accept strong enough arguments even if they challenge the reasoner’s prior beliefs or if they come from untrustworthy sources” – however, as you acknowledge, the power of arguments to persuade is limited, probably for good adaptive reasons (for one thing, if we had to change our minds with each good argument we heard, that would be a difficult way of living one’s life).
    I am wondering what you make of the following: I’ve now finally come to write up results of a survey of over 800 philosophers asking them to rate 8 arguments for theism and 8 arguments against theism (I placed the main finding in the paper I have in Topoi in your special issue, but the present paper provides a much more detailed analysis, looking at the arguments individually).
    Unsurprisingly, the philosophers’ beliefs (theism, atheism, agnosticism) predicted to a significant extent how strong they thought these arguments were. It’s no surprise that philosophers who were theists thought the arguments for theism were strong and the arguments against theism weak, and that the opposite pattern held for atheists. Correlations between religious belief and perceived strength of argument were quite strong, e.g., r = -.483 for the cosmological argument. If argument evaluation is objective, how can we explain these strong correlations? Would we expect the views of philosophers (the sample also included philosophers of religion) to be so colored by their prior beliefs on a model where argument evaluation is objective?
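    (A side note on the statistics: a correlation like the r = -.483 reported above can be illustrated with a small sketch. Everything in the snippet below is hypothetical and invented for illustration — the data, the 1/0/-1 coding of belief, and the 1–5 rating scale; the sign of r simply depends on which way belief is coded.)

```python
# Hypothetical illustration of a belief/rating correlation like the one
# Helen reports (r = -.483). All data and the coding scheme are invented:
# belief is coded 1 = theist, 0 = agnostic, -1 = atheist; ratings are
# 1-5 strength scores a philosopher might give one argument for theism.
from statistics import mean, stdev

belief = [1, 1, 1, 0, 0, -1, -1, -1, 1, -1]
rating = [5, 4, 4, 3, 3, 2, 1, 2, 5, 1]

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

print(round(pearson_r(belief, rating), 3))  # → 0.949 for this made-up sample
```

    (With belief coded the other way around, atheist = 1, the same invented data would give r = -0.949; a strong correlation in either direction is what an objective-evaluation model has to explain.)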


  3. r

    Re Helen @2: I am not sure how these surveys are set up, so I’m not sure if this is relevant. But: I would expect a strong correlation between (a)theism and the rating of (a)theist arguments among philosophers not just because their prior beliefs illegitimately color their uptake, but because (a)theists non-coincidentally subscribe to broader doctrines about justification, explanation, and so on, which determine whether the (a)theistic arguments in question are any good. This is just to say that they are aware of the general decision points determining what the good arguments are, and they accept a roughly consistent package with regard to them. But then it is not surprising that they tend to classify the arguments that support their view as good and the arguments against as bad–rather, the contrary would be quite surprising.
    But, as I said, I don’t know the methodology here. So, for instance, this sort of explanation wouldn’t apply to correlations between religious belief antecedent to entering philosophy and philosophical rating of arguments (except on the very generous supposition that laymen also have suitably firm general views about e.g. justification and explanation which they hew to). That would still seem to need explanation in terms of biased reasoning to some degree on one or both sides.


  4. Hugo Mercier

    Being persuaded to take others’ interests into account in addition to our own is fine — indeed, it is such non-zero-sum games that constitute cooperation, which is why we communicate in the first place.
    However, if people were easily persuaded to act against their interests, then every country could turn into a version of North Korea that would not even need the camps to scare people into submission. Being easily persuaded doesn’t mean only being easily persuaded to become more liberal…


  5. Hugo Mercier

    I guess I would offer the same explanation as the one in the post.
    The philosophers read the argument.
    Those who agree with the conclusion basically stop there and attempt to guess how strong they think the argument is. This is not as easy as it sounds, btw: I would say we don’t have introspective access to just how we evaluate an argument, but that we use our evaluation of the conclusion — which takes other factors into account, including our prior beliefs — as a proxy to some extent.
    Those who disagree with the conclusion are not convinced to change their mind by the argument, and so they start generating counter-arguments, as they would in a discussion. These counter-arguments will often point to flaws in the argument, leading to a more critical evaluation.
    What’s creating the bias here is the production of arguments that takes place after they have read and initially evaluated the argument. I’m not saying that it isn’t interesting that the production of arguments is biased, only that this bias is consistent with our theory.


  6. Anon

    Hugo,
    I should clarify that I don’t think the ability to be persuaded against self-interest is always a good thing. I only meant that the bias toward self-interest is often a bad thing.
    So, if I’m reading you right, you’re saying that on balance a bias toward self-interest is better than no bias at all. I want to say that no bias would be better–it can have bad consequences, but so can the bias toward self-interest.
    Again, I don’t find the examples convincing. I don’t think N. Korea is a case of people persuaded against self-interest. First, they’re not persuaded; they’re coerced. Second, the ruling powers are clearly acting in self-interest. Third, to the degree that their victims tolerate the situation, it’s clearly out of self-interest, avoiding the risk that open resistance would involve.
    So, I think the example supports my ambivalence about self-interested reasoning: oppressive states are overcome when people recognize 1) that the risk to immediate self-interest is morally necessary and 2) that their assumptions about their self-interest may be mistaken–the cost of immediate risk may be outweighed by long-term gain.


  7. Identitarian

    For what it’s worth: Identities. There are many Identities, and we move from one Identity to the next; we feed on multiple Identities at the same time, and we assign prominence and importance to various Identities at various times. One such Identity is the self, of course; and then you have Religion, Nation, State, Party, job, friends, family … you name it. This is step one.
    The beauty of reasoning, of knowledge, the glorious pursuit of truth, the quest for how things are: this is just one piece of the cake. We also act in this wilderness: we change the world, we process, we modify, we intervene in what is happening, we pursue our goals; we fight for how things should be. And not only that: we suffer, we enjoy, we fear, we triumph: we feel; we depend on the emotions that shake us. There are three axes, the cognitive, the moral, and the emotional; if you want some Greek, call them the theoretic, the ethic, and the esthetic. On these three axes we project our being and our Identities, and we have to consider these projections together to understand what is what. This is step two.
    Step one plus step two make step three. Identities prescribe knowledge, morals, emotions. Oh, nothing mechanical: there exists a general agreement about what reality is, how logic is supposed to work, what is morally acceptable and what is reprehensible, which feelings are allowed and which are inappropriate to experience. Each Identity only provides a partial Bible, and we have to put together the overall picture by ourselves. Identities mainly play on Attention and Communication, and there are other actors involved in the same game. Still, our momentary mix of Identities determines our picture of the world, which is also reinforced by our sense of how the world should be and by our satisfaction that it is going that way. Nothing is going to break into that rock. Sorry: no room for argumentation.
    Or is there? Well, yes, actually there are two uses for argumentation. We are not going to contradict the beliefs prescribed by our Identity, but we might deepen our concepts and interpretations with the reasoning provided by some external party. There is also a rare, desirable further possibility: that a discussion creates a new, vital Identity with the “enemy” who contradicts our beliefs, and through which we create a new order of possibilities. Sometimes it happens.
    Excuse me for my poor English.

