by Eric Schwitzgebel

In “1% Skepticism,” I suggest that it’s reasonable to have about a 1% credence that some radically skeptical scenario holds (e.g., that this is a dream or that we’re in a short-term sim), and that it’s sometimes reasonable to make decisions based on those small possibilities that we wouldn’t otherwise make (e.g., deciding to try to fly, or choosing to read a book rather than weed when one is otherwise right on the cusp).

But what about extremely remote possibilities with extremely large payouts? Maybe it’s reasonable to have a one in 10^50 credence in the existence of a deity who would give me at least 10^50 lifetimes’ worth of pleasure if I decided to raise my arms above my head right now. One in 10^50 is a very low credence, after all! But given the huge payout, if I then straightforwardly apply the expected value calculus, such remote possibilities might generally drive my decision making. That doesn’t seem right!
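
To make the worry concrete, here is the bare expected-value arithmetic, on the simplifying assumptions (not defended here) that a lifetime’s worth of pleasure counts as one util and that utils add linearly across lifetimes:

\[
\underbrace{10^{-50}}_{\text{credence in the deity}} \times \underbrace{10^{50}}_{\text{lifetimes of pleasure}} = 1 \text{ lifetime's worth of expected pleasure,}
\]

which would swamp the trivial expected cost of raising my arms, so the remote possibility would end up driving the decision.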

I see three ways to insulate my decisions from such remote possibilities without having to zero out those possibilities.

First, symmetry:

My credences about extremely remote possibilities appear to be approximately symmetrical and canceling. In general, I’m not inclined to think that my prospects will be particularly better or worse, given the influence of my choice on extremely unlikely deities considered as a group, if I raise my arms than if I do not. More specifically, I can imagine a variety of unlikely deities who punish and reward actions in complementary ways — one punishing what the other rewards and vice versa. (Similarly for other remote possibilities of huge benefit or suffering, e.g., happening to rise to an infinite Elysium if I step right rather than left.) This indifference among the specifics is partly guided by my general sense that extremely remote possibilities of this sort don’t greatly diminish or enhance the expected value of such actions. I see no reason not to be guided by that general sense — no argumentative pressure to take such asymmetries seriously in the way that there is some argumentative pressure to take dream doubt seriously.

Second, diminishing returns:

Bernard Williams famously thought that extreme longevity would be a tedious thing. I tend to agree instead with John Fischer that extreme longevity needn’t be so bad. But it’s by no means clear that 10^20 years of bliss is 10^20 times more choiceworthy than a single year of bliss. (One issue: If I achieve that bliss by repeating similar experiences over and over, forgetting that I have done so, then this is a goldfish-pool case, and it seems reasonable not to think of goldfish-pool cases as additively choiceworthy; alternatively, if I remember all 10^20 years, then I seem to have become something radically different in cognitive function than I presently am, so I might be choosing my extinction.) Similarly for bad outcomes and for extreme but instantaneous outcomes. Choiceworthiness might be very far from linear with temporal bliss-extension for such magnitudes. And as long as one’s credence in remote outcomes declines sharply enough to offset increasing choiceworthiness in the outcomes, then extremely remote possibilities will not be action-guiding: a one in 10^50 credence of a utility of +/- 10^30 is negligible.
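
Spelling out that last bit of arithmetic, assuming for illustration that the relevant utilities can be placed on a single numerical scale:

\[
10^{-50} \times 10^{30} = 10^{-20} \text{ expected utils,}
\]

far too small a contribution to tip any ordinary decision, so long as my credence falls off faster than the outcomes’ choiceworthiness grows.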

Third, loss aversion:

I’m loss averse rather than risk neutral. I’ll take a bit of a risk to avoid a sure or almost-sure loss. And my life as I think it is, given non-skeptical realism, is the reference point from which I determine what counts as a loss. If I somehow arrived at a one in 10^50 credence in a deity who would give me 10^50 lifetimes of pleasure if I avoided chocolate for the rest of my life (or alternatively, a deity who would give me 10^50 units of pain if I didn’t avoid chocolate for the rest of my life), and if there were no countervailing considerations or symmetrical chocolate-rewarding deities, then on a risk-neutral utility function it might be rational for me to forgo chocolate evermore. But forgoing chocolate would be a loss relative to my reference point; and since I’m loss averse rather than risk neutral, I might be willing to forgo the possible gain (or risk the further loss) so as to avoid the almost-certain loss of lifelong chocolate pleasure. Similarly, I might reasonably decline a gamble with a 99.99999% chance of death and a 0.00001% chance of 10^100 lifetimes’ worth of pleasure, even bracketing diminishing returns. I might even reasonably decide that at some level of improbability — one in 10^50? — no finite positive or negative outcome could lead me to take a substantial almost-certain loss. And if the time and cognitive effort of sweating over decisions of this sort itself counts as a sufficient loss, then I can simply disregard any possibility where my credence is below that threshold.
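
One standard way to picture the loss aversion I have in mind (a rough sketch in the style of a prospect-theoretic value function, not a precise claim about my own psychology) is to evaluate each outcome relative to the non-skeptical reference point and to weight losses more heavily than gains:

\[
v(x) =
\begin{cases}
x & \text{if } x \ge 0 \text{ (a gain relative to the reference point)} \\
\lambda x & \text{if } x < 0 \text{ (a loss relative to the reference point)}
\end{cases}
\qquad \lambda > 1.
\]

On such a function, forgoing chocolate registers as a loss and gets magnified by \(\lambda\), which raises the bar that the remote possible gain must clear; together with diminishing returns, that can make declining the deal rational.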

These considerations synergize: the more symmetry and the more diminishing returns, the easier it is for loss aversion to inspire disregard. Decisions at credence one in 10^50 are one thing, decisions at credence 0.1% quite another.

[Cross-posted at the Splintered Mind]

6 responses to “How to Disregard Extremely Remote Possibilities”

  1. David Wallace

    Quite apart from the extremely-remote-possibility issue, the expected-utility rule isn’t well-defined unless we assume utilities are restricted to lie in some bounded region (i.e., we have a very strong form of diminishing returns). If for every real number there’s a reward with a utility equal to that number, then the utility of
    2 utils with probability 1/2, 4 utils with probability 1/4, 8 utils with probability 1/8, …
    isn’t defined. As I recall, Savage derives boundedness of utility as a theorem in his representation theory for decision theory.
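    Written out, the expected utility of that gamble would be the divergent series
    \[
    \sum_{n=1}^{\infty} \frac{1}{2^{n}} \cdot 2^{n} \;=\; \sum_{n=1}^{\infty} 1 \;=\; \infty,
    \]
    so no finite number can represent it.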

  2. Eric Schwitzgebel

    David: I’m inclined to agree, and prefer to work with bounded utilities. But if I can make it work for unbounded utilities too, then bonus! I won’t be too worried if my approach fails in St. Petersburg cases, as long as it works in bounded cases.

  3. Owen Schaefer

    The second and the third responses could be obviated with a slightly different challenge: instead of raising your arms giving you some unfathomable amount of utility, raising your arms saves a massive number of people from significant, if more realistic, suffering. Then, the question is whether you’re obligated/have really strong reason to raise your arms. Since there are no diminishing returns from saving more people from suffering, and it can be framed as avoiding a loss for others rather than a gain, the latter two responses don’t seem to apply.
    The symmetry response still works, of course – at least in the way the problem is framed. But I think you might still be vulnerable to Pascal’s Mugging (http://www.nickbostrom.com/papers/pascal.pdf), where an agent’s claims to be able to perform the relevant miracles create a meaningful asymmetry. (You may just discount extremely small probabilities, though, something that’s sorta implied by your comments at the end of the symmetry section.)

  4. Eric Schwitzgebel

    Thanks for that very interesting comment, Owen! I’d actually already been thinking about exactly those two cases as potential problems, so we’re on the same wavelength.
    On the ethical case: It’s somewhat harder to make the diminishing returns argument for this case, but I do think it can still be argued, at least. It’s not clear that killing two people is twice as bad as killing one or that killing a billion is a million times as bad as killing a thousand — though my intuitions here are pretty shaky. Similarly for creation or reward: It’s not clear that creating or rewarding a billion is a million times more choiceworthy than creating a thousand. One way of thinking that greases my intuitions a bit toward this conclusion is something in the ballpark of goldfish-pool / eternal return / identity of indiscernibles thinking. At some magnitude, one is likely just to be creating lots of duplicates of (almost?) exactly the same thing.
    On Pascal’s mugging: I agree that in this case there is some reason not to accept symmetry. I could justify reestablishing symmetry by thinking that it’s just as likely to be a prank that delivers negative utils if I fall for it — or perhaps thinking that whatever credence I give to the mugger’s promise I should give many orders of magnitude more credence (though still very small credence) to this whole series of events being some practical joke by beings who might punish or reward anything. Still, it seems a little forced to insist that symmetry will be precisely restored — so it’s nice that I have my second and third arguments still to rely on!

  5. Dan Dennis

    The literature on Pascal’s Wager discusses a lot of these sorts of issues. Probably the best paper on this topic is Alan Hájek’s ‘Waging War on Pascal’s Wager’ in the Philosophical Review, 2003. Available here
    http://philrsss.anu.edu.au/people-defaults/alanh/papers/waging_war_galleys.pdf
    I think it was in the annual review of ten best papers that year too, so worth a look…

  6. Eric Schwitzgebel

    Thanks for the tip on that, Dan! Yes, that’s a cool paper. Nick Bostrom’s “Pascal’s Mugging” is also pretty interesting in this connection.
