By Eric Schwitzgebel

Consider cases in which a person sincerely endorses some proposition (“women are just as smart as men”, “family is more important than work”, “the working poor deserve as much respect as the financially well off”), but often behaves in ways that fail to fit with that sincerely endorsed proposition (typically treats individual women as dumb, consistently prioritizes work time over family, sees nothing wrong in his or others’ disrespectful behavior toward the working poor). Call such cases “dissonant cases” of belief. Intellectualism is the view that in dissonant cases the person genuinely believes the sincerely endorsed proposition, even if she fails to live accordingly. Broad-based views, in contrast, treat belief as a matter of how you steer your way through the world generally.

Dissonant cases of belief are, I think, “antecedently unclear cases” of the sort I discussed in this post on pragmatic metaphysics. The philosophical concept of belief is sufficiently vague or open-textured that we can choose whether to embrace an account of belief that counts dissonant cases as cases of belief, as intellectualism would do, or whether instead to embrace an account that counts them as cases of failure to believe or as in-between cases that aren’t quite classifiable either as believing or as failing to believe.

I offer the following pragmatic grounds for rejecting intellectualism in favor of a broad-based view. My argument has a trunk and three branches.

——————————————–

The trunk argument.

Belief is one of the most central and important concepts in all of philosophy. It is central to philosophy of mind: Belief is the most commonly discussed of the “propositional attitudes”. It is central to philosophy of action, where it’s standard to regard actions as arising from the interaction of beliefs, desires, and intentions. It is central to epistemology, much of which concerns the conditions under which beliefs are justified or count as knowledge. A concept this important to philosophical thinking should be reserved for the most important thing in the vicinity that can plausibly answer to it. The most important thing in the vicinity is not our patterns of intellectual endorsement. It is our overall patterns of action and reaction. What we say matters, but what we do in general, how we live our lives through the world — that matters even more.

Consider a case of implicit classism. Daniel, for example, sincerely says that the working poor deserve equal respect, but in fact for the most part he treats them disrespectfully and doesn’t find it jarring when others do so. If we, as philosophers, choose to describe Daniel as believing what he intellectually endorses, then we implicitly convey the idea that Daniel’s patterns of intellectual endorsement are what matter most to philosophy: Daniel has the attitude that stands at the center of so much of epistemology, philosophy of action, and philosophy of mind. If we instead describe Daniel as mixed up, in-betweenish, or even as failing to believe what he intellectually endorses, we do not implicitly convey that intellectualist idea.


Branch 1.

Too intellectualist a view invites us to adopt noxiously comfortable opinions about ourselves. Suppose our implicit classist Daniel asks himself, “Do I believe that the working poor deserve equal respect?” He notices that he is inclined sincerely to judge that they deserve equal respect. Embracing intellectualism about belief, he concludes that he does believe they deserve equal respect. He can say to himself, then, that he has the attitude that philosophers care about most – belief. Maybe he lacks something else. He lacks “alief” maybe, or the right habits, or something. But based on how philosophers usually talk, you’d think that’s kind of secondary. Daniel can comfortably assume that he has the most important thing straightened out. But of course he doesn’t.

Intellectualist philosophers can deny that Daniel does have the most important thing straightened out. They can say that how Daniel treats people matters more than what he intellectually endorses. But if so, their choice of language mismatches their priorities. If they want to say that the central issue of concern in philosophy is, or should be, how you act in general, then the most effective way to encourage others to join them in that thought is to build the importance of one’s general patterns of action right into the foundational terms of the discipline.

Branch 2.

Too intellectualist a view hides our splintering dispositions. Here’s another, maybe deeper, reason Daniel might find himself too comfortable: He might not even think to look at his overall patterns of behavior in evaluating what his attitude is toward the working poor. In Branch 1, I assumed that Daniel knew that his spontaneous reactions were out of line, and he only devalued those spontaneous reactions, not thinking of them as central to the question of whether he believed. But how would he come to know that his spontaneous reactions are out of line? If he’s a somewhat reflective, self-critical person, he might just happen to notice that fact about himself. But an intellectualist view of the attitudes doesn’t encourage him to notice that about himself. It encourages Daniel, instead, to determine what his belief is by introspection of or reflection upon what he is disposed to sincerely say or accept.

In contrast, a broad-based view of belief encourages Daniel to cast his eye more widely in thinking about what he believes. In doing so, he might learn something important. The broad-based approach brings our non-intellectual side forward into view, while the intellectualist approach tends to hide that non-intellectual side. Or at least it does so to the extent we are talking specifically about belief — which is, of course, a large part of what philosophers actually talk about in philosophy of mind, philosophy of action, and epistemology.

Another way in which intellectualism hides our splintering dispositions is this: Suppose Suleyma has the same intellectual inclinations as Daniel but unlike Daniel her whole dispositional structure is egalitarian. She really does, and quite thoroughly, have as much respect for the custodian as for the wealthy business-owner. An intellectualist approach treats Daniel and Suleyma as the same in any domain where what matters is what one believes. They both count as believers, so now let’s talk about how belief couples with desire to beget intentions, let’s talk about whether their beliefs are justified, let’s talk about what set of worlds makes their beliefs true — for all these purposes, they are modeled in the same way. The difference between them is obscured, unless additional effort is made to bring it to light.

You might think Daniel’s and Suleyma’s differences don’t matter too much. They’re worth hiding or eliding away or disregarding unless, for some reason, those differences become important. If that’s your view, then an intellectualist approach to belief is for you. If, on the other hand, you think their differences are crucially important in a way that ought to disallow treating them as equivalent in matters of belief, then an intellectualist view is not for you. Of course, the differences matter for some purposes and not so much for others. The question is whether, on balance, it’s better to put those differences in the foreground or to tuck them away as a nuance.

Branch 3.


Too intellectualist a view risks downgrading our responsibility. It’s a common idea in philosophy that we are responsible for our beliefs. We don’t choose our beliefs in any straightforward way, but if our beliefs don’t align with the best evidence available to us we are epistemically blameworthy for that failure of alignment. In contrast, our habits, spontaneous reactions, that sort of thing — those are not in our control, at least not directly, and we are less blameworthy for them. My true self, my “real” attitude, the being I most fundamentally am, the locus of my freedom and responsibility — that’s constituted by the aspects of myself that I consciously endorse upon reflection. You can see how the intellectualist view of belief fits nicely with this.

I think that view is almost exactly backwards. Our intellectual endorsements, when they don’t align with our lived behavior, count for little. They still count for something, but what matters more is how we spontaneously live our way through the world, how we actually treat the people we are with, the actual practical choices we make. That is the “real” us. And if Daniel says, however sincerely, that he is an egalitarian, but he doesn’t live that way, I don’t want to call him a straight-up egalitarian. I don’t want to excuse him by saying that his inegalitarian reactions are mere uncontrollable habit and not the real him. It’s easy to talk. It’s hard to change your life. I don’t want to let you off the hook for it in that way, and I don’t want to let myself off the hook. I don’t want to say that I really believe and I am somehow kind of alienated from all my unlovely habits and reactions. It’s more appropriately condemnatory to say that my attitude, my belief state, is actually pretty mixed up.

It’s hard to live up to all the wonderful values and aspirations we intellectually endorse. I am stunned by the breadth and diversity of our failures. What we sincerely say we believe about ourselves and the people around us and how we actually spontaneously react to people and what we actually choose and do — so often they are so far out of line with each other! So I think we’ve got to have quite a lot of forgiveness and sympathy for our failures. My empirical, normative, pragmatic conjecture is this: In an appropriate context of forgiveness and sympathy, the best way to frankly confront our regular failure to live up to our verbally espoused attitudes is to avoid placing intellectual endorsements too close to the center of philosophy.


[Cross-posted at The Splintered Mind]

I will check comments here for a week. After that time, please comment at The Splintered Mind.


10 responses to “Some Pragmatic Considerations Against Intellectualism about Belief”

  1. Anon

    Hey Eric,
    So, I tend to agree that in pristine cases of the kind you describe, where some sincerely endorsed view has virtually no behavioral consequences, the sincerely endorsed view is not a belief (whatever else it might be). But I also think such cases are likely to be extremely rare.
    In more realistic cases of implicit bias, there will be behavioral implications for both the sincerely endorsed view and for the disavowed view. The implicit racist might vote for policies that reduce violence against people of color, but might not socialize with POC or might treat them differently in social settings than they treat non-POC. The implicit sexist might support women-friendly policies in their workplace but might nevertheless rank particular women as less competent than particular men, even when both are equally qualified on some measure. What I think best describes these cases is deeply conflicting views (call them beliefs if you want). That is to say, one’s credence that P, where P is avowed, and one’s credence that not-P, where not-P is disavowed, will NOT sum to 1. And they won’t come close to summing to 1.
    On the view that one’s implicit and explicit views conflict, it is not the case that in some contexts one holds P and in other contexts one holds not-P. One can, in the very same context, hold both P and not-P, and both to a high degree. The reason for thinking this is so is that there are likely contexts in which one experiences and exhibits deeply polarizing responses about the very same situation, responses corresponding to each of one’s conflicting views. E.g., should you hire a particular job candidate, a woman of color? You want to promote a diverse workplace, but you find yourself scanning her CV for shortcomings, subjecting it to a degree of scrutiny other candidates don’t receive. You struggle to form an opinion of the candidate because of the simultaneous pull of both the avowed and disavowed views. Examples can be multiplied.


  2. Irene Fenswick

    People often fail and make mistakes, but that’s a part of the journey. One Japanese proverb says, “Fall seven times, stand up eight.” Thank you for such a thought-provoking post.


  3. Eric Schwitzgebel

    Anon — I agree that purely pristine cases are a fiction, and I wouldn’t be inclined to think that a case like Daniel’s would be pristine. Maybe I should have made the point more explicitly, but I settled for saying “for the most part” in the post.
    Conflicting beliefs is one way to go with this, but I’m not sure what the advantage is of framing things in that way, compared to going broad-based as I prefer. One disadvantage: If someone asks, “Does Daniel believe P?”, you can simply say “yes” on the conflicting beliefs view. Then later, if someone asks “Does Daniel believe not-P?”, you can also simply say “yes”. [corrected from “no”] An outsider watching that might find that a bit odd. Better, I think, to refrain from a simple yes-or-no answer and describe it as a mixed-up, in-betweenish case (which I think is the best approach on the broad-based view).


  4. anon

    Hey Eric,
    So, I see your interpretationist, Dennett-ian worry: “One disadvantage: If someone asks, “Does Daniel believe P?”, you can simply say “yes” on the conflicting beliefs view. Then later, if someone asks “Does Daniel believe not-P?”, you can simply say “no”. An outsider watching that might find that a bit odd.”
    So I must say, I independently have reasons to reject anything that remotely smacks of interpretationism, so I’m not terribly worried about this. I think it is just a confusion to think that which mental states a person has is a function of what a relevant evaluator would attribute to them, on the assumption that the person is rational. I can’t justify that stance here, but I just wish to note that it is not universal among philosophers that interpretation matters.
    But even bracketing my broad worries about interpretationism, I just don’t buy that “an outsider” would find it odd to attribute contradictory beliefs to the same person. That it would be odd strikes me as a bit of a philosopher’s fancy, though some x-phi might better shed light on whether my suspicion is correct. But we do have a bit of anecdotal evidence: people often reply to questions of the form “Do you think P?” with “I do, and I don’t.” Sometimes what they mean by this is that on one way of disambiguating P, they do hold P, whereas on another way of disambiguating P, they don’t hold P. But sometimes they just mean that they hold simultaneous, deeply conflicting beliefs about P. (Again, the conflict is “deep” in the sense that the sum of their credences in P and not-P is far greater than 1.) It seems to me that people who are honest with themselves and others about their own psychology often recognize and report deeply conflicting beliefs. And this isn’t even to touch on deep conflicts between conscious and non-conscious beliefs, which I don’t see any (much?) reason whatsoever from rational interpretation to deny.
    I will say that I think philosophers think it’s odd to say, “I believe P and I believe not-P,” but I am of the view that this reflects the hyper-rationalist tendencies of philosophers who, for whatever reason, seem especially inclined to deny the deep contradictions that lie within their own psychology. In effect, I think philosophers are idiosyncratic about this issue and perhaps less equipped than others to speak to it. (And I don’t think it’s because philosophers are more rational than others; I think philosophers think they’re more rational than others.)
    Finally, cognitive dissonance can work to change beliefs — it often doesn’t, and it can fail for myriad reasons — but it can change beliefs. What appears to be necessary for this change is a recognition on the part of the subject that two of her views are in conflict (this conflict is sometimes illusory, as in counter-attitudinal tasks). If we have a mechanism whose function is to track and root out conflicts, why would we have such a mechanism if not because we sometimes have such conflicts? Surely this is a very strong reason to think beliefs can and often do conflict, even deeply and even at the level of conscious awareness.


  5. Eric Schwitzgebel

    Thanks for pushing on this so helpfully, Anon! I like to analogize to personality traits. Imagine someone who is socially courageous in risking her financial welfare and social standing to stand up for what’s right but who lacks physical courage in the face of violent threats. Is she courageous? “Well, yes and no” could be a reasonable way of beginning — though not a very helpful place to stop. She is courageous in some respects but not in others. One way of thinking about this is to think of courage as involving a suite of dispositions. She has some of the dispositions, while she lacks others. It’s not that she has two ontologically real personality switches inside, one switched to “courageous” and another switched to “uncourageous” and they are buzzing against each other in conflict.
    So similarly for beliefs in my view. In a realistic Daniel-like case, he matches the dispositional profile of an egalitarian in some respects but not in others. No need to think there’s an ontologically real “P” and “not-P” in his belief box dueling it out.
    The normativity of rationality on my picture of attitudes is something I’m hoping to work through in more detail soon. For now, let me say that there is some cognitive pressure to conform to belief stereotypes and that that type of cognitive pressure can be part of the picture in explaining how cognitive dissonance phenomena work.


  6. anon

    Hey Eric,
    This is helpful, thank you. I suspect I will continue to disagree with you on some particular cases (i.e., that I will think some involve deeply conflicting beliefs, where you think those are cases of in-between belief), but the alternative ‘in between’ picture you’re suggesting and have already defended elsewhere is a fruitful one. And it may be that some range of cases involves in-between beliefs, whereas others involve deeply conflicting beliefs. How to distinguish or adjudicate between these two models is an interesting question.
    I look forward to hearing more about your model of cognitive dissonance as you develop it. I guess the suggestion is that dispositions change when dissonance results in an apparent shift in beliefs? Then, on your view, do token mental states shift from in-between beliefs to determinate beliefs? Anyway, much to think about.


  7. akreider

    Thank you for the thought-provoking post.
    To defend intellectualism a bit, it seems that in order for individual agents to change their dispositions in the right way, they need to start with belief. At least part of the reason why we are responsible for our bad habits is that we can correctly believe that they are in fact bad and in need of changing (or we can fail to have the belief, as the case may be). The dissonance is thus a necessity, and belief deserves its place of privilege.
    Further, the Daniel case may not be a case of conflicting or mixed-up beliefs, but of simply having a false belief — that his actions are not classist. This could be because he’s not aware of many ways of being classist, or perhaps he hasn’t asked himself about the nature of his actions. The case to examine would then be the one in which it is pointed out to Daniel that his actions do not line up with his other beliefs, and we look at his response. Does he shrug his shoulders, or does he look to change his habits?
    The challenge, then, is not to move away from intellectualism but to encourage a more thorough-going intellectual assessment — of our actions as well as of more abstract moral principles. But such an assessment still seems to go through belief.


  8. Eric Schwitzgebel

    Thanks! On what you say, yes there’s a shift with dissonance — although talk of “token mental states” sounds a little too suggestive of a kind of realism about mental states as countable entities for me to be entirely comfortable with that phrasing.


  9. Eric Schwitzgebel

    That does seem to me like the best direction to develop an intellectualist view, akreider. But I’d still push back against intellectualism. On your first paragraph, for example, the pushback toward change might start with an emotional reaction, with the intellectual stuff tagging along behind it. On your second paragraph, I suspect the case could be developed in either direction, and that a psychologically realistic version would tend to be a compromise between the two directions.

