We might soon be creating monsters, so we’d better figure out our duties to them.

Robert Nozick’s Utility Monster derives 100 units of pleasure from each cookie she eats. Normal people derive only 1 unit of pleasure per cookie. So if our aim is to maximize world happiness, we should give all our cookies to the monster. Lots of people would lose out on a little bit of pleasure, but the Utility Monster would be really happy!
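Just to make the arithmetic explicit, here is a minimal sketch of the total-happiness calculation behind the argument. The ten-cookie, ten-person scenario and the labels in the code are my own toy assumptions, not part of Nozick’s example.

```python
# A toy tally of "world happiness" under simple total utilitarianism.
# All numbers and names are illustrative assumptions for this sketch.

def total_happiness(cookies_given, pleasure_per_cookie):
    """Sum of (cookies given) * (pleasure per cookie) across all beings."""
    return sum(cookies_given[being] * pleasure_per_cookie[being]
               for being in cookies_given)

# One monster (100 units per cookie) and ten ordinary people (1 unit per cookie).
pleasure_per_cookie = {"monster": 100, **{f"person_{i}": 1 for i in range(10)}}

# Option A: one cookie each for the ten ordinary people, none for the monster.
even_split = {"monster": 0, **{f"person_{i}": 1 for i in range(10)}}

# Option B: all ten cookies go to the Utility Monster.
monster_takes_all = {"monster": 10, **{f"person_{i}": 0 for i in range(10)}}

print(total_happiness(even_split, pleasure_per_cookie))         # 10
print(total_happiness(monster_takes_all, pleasure_per_cookie))  # 1000
```

On this simple tally, every cookie diverted to the monster buys 100 units of happiness instead of 1, so the maximizing allocation gives her everything.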

Of course this argument generalizes beyond cookies. If there were a being in the world vastly more capable of pleasure and pain than are ordinary human beings, then on simple versions of happiness-maximizing utilitarian ethics, the rest of us ought to immiserate ourselves to push it up to superhuman pinnacles of joy.

Now, if artificial consciousness is possible, then maybe it will turn out that we can create Utility Monsters on our hard drives. (Maybe this is what happens in R. Scott Bakker’s and my story Reinstalling Eden.)

Two questions arise:


(1.) Should we work to create artificially conscious beings who are capable of superhuman heights of pleasure? On the face of it, it seems like a good thing to do, to bring beings capable of great pleasure into the world! On the other hand, maybe we have no general obligation to bring happy beings into the world. (Compare: Many people think we have no obligation to increase the number of human children even if we think they would be happy.)

(2.) If we do create such beings, ought we to immiserate ourselves for their happiness? It seems counterintuitive to say that we should, but I can also imagine a perspective on which it makes sense to sacrifice ourselves for superhumanly great descendants.

The Utility Monster can be crafted in different ways, possibly generating different answers to (1) and (2). For example, maybe simple sensory pleasure (a superhumanly orgasmic delight in cookies) wouldn’t be enough to compel either (1) creation or (2) sacrifice after creation. But maybe “higher” pleasures, such as great aesthetic appreciation or great intellectual insight, would. Indeed, if artificial intelligence plays out right, then maybe whatever it is about us that we think gives our lives value could be artificially duplicated a hundredfold inside machines of the right type (maybe biological machines, if digital computers won’t do).

You might think, as Nozick did, and as Kantian critics of utilitarianism sometimes do, that we can dodge Utility Monster concerns by focusing on the rights of individuals. Even if the Monster would get 100 times as much pleasure from my cookie as I would, it’s my cookie; I have a right to it and no moral obligation to give it to her.

But similar issues arise if we allow Fission/Fusion Monsters. If we say “one conscious intelligence, one vote”, then what happens when I create a hundred million conscious intelligences in my computer? If we say “one unemployed consciousness, one cookie from the dole”, then what happens if my Fission/Fusion Monster splits into a hundred million separate individual unemployed conscious beings, collects its cookies, and then in the next tax year merges back into a single cookie-rich being? A Fission/Fusion Monster could divide at will into many separate individuals, each with a separate claim to rights and privileges as an individual; and then whenever convenient, if the group so chose (or alternatively via some external trigger), fuse back together into a single massively complex individual with first-person memories from all its predecessors.

(See also: Our Possible Imminent Divinity.)

[Cross-posted at The Splintered Mind]

10 responses to “Our Moral Duties to Monsters”

  1. Carlo Ierna

    Didn’t utilitarianism plead for “the greatest happiness of the greatest number”? Making one single cookie monster inordinately happy would maximize the former, but certainly not the latter. The swarm is no problem at all, since it would ex definitione be just as happy as a hive mind with one cookie as it would have been when split up (i.e., it can make itself (its selves) happy for free), not to mention the possibility of virtual cookies. I think the example is extremely misleading and falls prey to the same kind of problems as the “Bill Gates walks into a bar and everybody’s net worth goes off the charts” example. Creating an ecstatic being leaves the rest of the world exactly as it is (procreating, however, does not make just the child happy …). To avoid this, perhaps it is better to consider only the kind of happiness that replaces unhappiness instead of just averaging it out.

  2. Catarina Dutilh Novaes

    (Tangential comment) Your Fission/Fusion Monster reminds me of that classic Star Trek episode where two characters are merged into one being, only to split again at the end of the episode. The most philosophically interesting Star Trek episode that I can recall.

  3. Eric Schwitzgebel

    Thanks for the comments, folks!
    Carlo: There are different variants, of course, of all these positions. All I really want to suggest here is that the issue is a serious one to consider if artificial consciousness becomes possible, and that some of our favorite views might have surprising consequences.
    Catarina: I don’t recall that episode! By “classic” do you mean Kirk era?

  4. Christy Mag Uidhir

    Catarina I believe is referring to an episode of Star Trek: Voyager in which two characters (Tuvok & Neelix), as the result of a transporter accident, are merged together to form one being (named Tuvix). The great part about the episode is that Tuvix strikes everyone as being in all meaningful ways a much better person than either Tuvok or Neelix (both of whom were pretty insufferable). Much to the dismay of the crew (and the viewing audience), Captain Janeway decides in the end that the right thing to do is to reverse the process and thereby kill Tuvix so that Tuvok and Neelix may live again.

  5. Alan White

    In “The Enemy Within” (Season 1 of the original series, episode 5; yeah, I watched it when broadcast as a kid), Kirk is split into good and bad Kirks when the transporter malfunctions, but at the end of the show he is reunited by a reversal of that process. Parfitian 60s TV. Could be there is a ST second-gen episode with Picard, but I can’t think of it.

  6. John

    If I’m not mistaken, Catarina is referring to an episode from Star Trek: Voyager in which there is a transporter malfunction. Tuvok and Neelix were fused into one being who called itself ‘Tuvix’. Definitely a great episode.

  7. Kimberly

    That’s an episode of ST:Voyager, “Tuvix”. A great example for messing with intuitions about personal identity.

  8. Eric Schwitzgebel

    Thanks for those suggestions, all! It will give me a good excuse to go watch some Star Trek — I used to be a big fan, but it has been a while.

  9. Nathan

    I firmly believe we have no moral duties to context-shifting expressions.

  10. Catarina Dutilh Novaes

    Yes, I was referring to “Tuvix” indeed. It is not a true ‘classic’ in the sense that it’s part of the Voyager series, not of the original Capt. Kirk Star Trek, but I think it’s great for all kinds of reasons, including metaphysical and ethical reflection (hence the connection with Eric’s post).
