Friday, July 20, 2012

The Utilitarian Boogeymen

This is the fourth in a series of posts on utilitarianism.  The first is an introduction.  The second is a post on average vs. total utilitarianism.  The third is a post on act vs. rule, classical vs. negative, and hedonistic vs. high/low pleasure utilitarianism.

____________________________________



This post is a response to various objections people often raise to utilitarianism.


The Repugnant Conclusion


The Repugnant Conclusion (or mere addition paradox) is a thought experiment designed by Derek Parfit, meant as a critique of total utilitarianism.  The thought experiment goes roughly as follows:  Suppose that there are 1,000 people on Earth, each with happiness 2.  Well, a total utilitarian would prefer that there be 10,000 people each with happiness 1, or even better 100,000 people each with happiness 0.5, etc., leading eventually to lots and lots and lots of people each with almost no utility: in fact, for any arbitrarily small (but positive) utility U, there is a number of people N such that one would prefer N people at U utility to a given situation.  This, says Parfit, is repugnant.

I would argue, however, that this potential earth--with, e.g., 1,000,000,000 people, each with 0.015 utils of happiness--is far from a dystopia.  First of all, it is important to realize that this conclusion only holds as long as U--the happiness per person--remains positive; when U becomes negative, adding more people just decreases total utility.  So, when imagining this potential planet it's important not to think of trillions of people being tortured; instead it's trillions of people living marginally good lives--lives worth living.
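
To see the arithmetic a total utilitarian is actually doing here, here's a minimal sketch in Python (the numbers are just the ones from the example above; "utility" stands in for whatever your preferred measure of happiness is):

```python
# Total utilitarianism ranks worlds by population * per-person happiness, nothing else.
def total_utility(population, happiness_per_person):
    return population * happiness_per_person

worlds = [
    (1_000, 2.0),            # the original world
    (10_000, 1.0),           # more people, each somewhat less happy
    (100_000, 0.5),          # even more people, each even less happy
    (1_000_000_000, 0.015),  # the allegedly "repugnant" world
]

for population, happiness in worlds:
    print(f"{population:>13,} people at {happiness}: total = {total_utility(population, happiness):,.0f}")

# Each world beats the one before it (2,000 < 10,000 < 50,000 < 15,000,000) -- but only
# because per-person happiness stays positive; adding people with negative happiness
# would lower the total.
```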

Still, many people have the intuition that 1,000,000,000,000 people at happiness 2 (Option A) is better than 1,000,000,000,000,000 people with happiness 1 (Option B).  But I posit that this comes not from a flaw in total utilitarianism, but instead from a flaw in human intuition.  You remember those really big numbers with lots of zeros that I listed earlier in this paragraph?  What were they?  Many of you probably didn't even bother counting the zeros, instead just registering them as "a really big number" and "another really big number, which I guess kind of has to be bigger than the first really big number for Sam's point to make sense, so it probably is."  In fact, English doesn't even really have a good word for the second number ("quadrillion" sounds like the kind of thing a ten-year-old would invent to impress a friend).  The point--a point that has been supported by research--is that humans don't fundamentally understand numbers above about four.  If I show you two dots you know there are two; you know there are exactly two, and that that's twice one.  If I show you thirteen dots, you have to count them.

And so when presented with Options A and B from above, people really read them as (A): some big number of people with happiness 2, and (B): another really big number of people with happiness 1.  We don't really know how to handle the big numbers--a quadrillion is just another big number, kind of like ten thousand, or eighteen.  And so we mentally skip over them.  But 2 and 1: those we understand, and we understand that 2 is twice as big as 1, and that if you're offered the choice between 2 and 1, 2 is better.  And so we're inclined to prefer Option A: because we fundamentally don't understand the fact that in Option B, one thousand times as many people are given the chance to live.  Those are entire families, societies, countries that only get the chance to exist if you pick Option B; and by construction of the thought experiment, they want to exist, and will have meaningful existences, even if they're not as meaningful on a per-capita basis as in Option A.

Fundamentally, even though the human mind is really bad at understanding it, 1,000,000,000,000,000 is a lot bigger than 1,000,000,000,000; in fact the difference dwarfs the differences between things we do understand, like the winning percentage of the Yankees versus that of the Royals, or the numbers 2 and 1.  And who are we to deny existence to those 999,000,000,000,000 people because we're too lazy to count the zeros?
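
If you do bother to count the zeros, the comparison looks like this (a quick sketch using the numbers from Options A and B above):

```python
# Option A: a trillion people at happiness 2; Option B: a quadrillion people at happiness 1.
pop_a, happiness_a = 10**12, 2
pop_b, happiness_b = 10**15, 1

print(pop_a * happiness_a)   # 2,000,000,000,000     (Option A's total)
print(pop_b * happiness_b)   # 1,000,000,000,000,000 (Option B's total -- 500 times larger)
print(pop_b - pop_a)         # 999,000,000,000,000 people who only ever exist under Option B
```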

I have one more quibble with Parfit's presentation of the thought experiment: the name.  Naming a thought experiment "The Repugnant Conclusion" is kind of like naming a bill "The Patriot Act" so that you can call anyone who votes against it unpatriotic.  I'm all in favor of being candid about your thoughts, but do so in analysis and discussion, not in naming, because a name is something that everyone is obliged to agree with you on.

By the way, I'm naming the above thought experiment the "if-you-disagree-with-Sam-Bankman-Fried-on-this-then-you-probably-have-a-crush-on-Lord-Voldemort conclusion", or "Sam Is Great" for short, and would appreciate it if it were referred to as such.


The Utility Monster



The Utility Monster is the other of the two famous anti-utilitarian thought experiments.  There are a few different versions of it running around.  All of the versions revolve around a hypothetical creature known as the Utility Monster.  In some versions it gains immense amounts of pleasure from torturing people--more pleasure than the pain they feel--and in others it simply gains more pleasure than others do from consuming resources, e.g. food, energy, etc., and so the utilitarian solution would be to allow it all the resources, while the rest of humanity either withers away or continues a greatly diminished existence.

There's something that seems intuitively wrong about giving all societal resources to one utility monster, but what's going on here is really the same thing as the intuition behind negative utilitarianism: because it's so much easier to make someone feel really bad (e.g. torture them) than really good, no normal person who consumed the world's resources could gain anything close to the losses associated with seven billion tortured people.  In fact if you took a random person and gave them unlimited resources, it's unlikely they'd be able to make up for even a single tortured person (especially given the lack of basically any happiness gained from income above about $75,000).  In order to make up an additional factor of seven billion, the utility monster in question would have to be a creature that behaves fundamentally differently from any creature we've ever encountered.  In other words, in order for utilitarianism to disagree with our intuitions and endorse a utility monster, the situation would have to be way outside the set of situations we have ever encountered.
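
To see the shape of that argument with some rough numbers attached (these figures are invented purely for illustration, not measurements of anything):

```python
# Invented, order-of-magnitude numbers purely to illustrate the asymmetry.
people_on_earth = 7 * 10**9
loss_per_tortured_person = 1_000   # pain is easy to produce in large quantities...
max_gain_for_one_human = 100       # ...pleasure is not, even with unlimited resources

# A normal person handed everything can't even offset one tortured person:
print(max_gain_for_one_human - loss_per_tortured_person)   # -900

# A genuine utility monster would have to out-enjoy a normal person by a factor of
# roughly (7 billion tortured people) * (the pain/pleasure asymmetry above):
required_factor = people_on_earth * loss_per_tortured_person / max_gain_for_one_human
print(f"{required_factor:,.0f}")   # 70,000,000,000 -- nothing like any creature we've met
```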

And, in fact, it would be a little bit weird if the optimal decisions didn't disagree with our moral intuitions in weird-ass situations, because our intuitions are not meant to deal with them.  This is a phenomenon frequently seen in physics: when you get to extreme situations outside of the size/speed ranges humans generally interact with, our intuitions are wrong.  It seems really weird that if you travel at the speed of light you don't age relative to the rest of the universe, but that's because our intuitions were developed for 10-mile-per-hour situations, not for the speed of light.  It seems really weird that objects do not have well-defined positions or velocities but are instead complex-valued wave functions floating through space, but that's because our intuitions weren't developed to deal with electrons.  It seems really weird that slamming two pieces of plutonium together can destroy a city, but it can.

Long story short, utility monsters are generally really bad bargains even for a utilitarian, and trying to build special rules against them just double-counts this effect (like negative utilitarianism double-counts our intuition that it's harder to be really happy than really sad).  And when a utility monster really is the utilitarian option, you're in such a bizarre situation that we shouldn't expect our intuitions to work in it anyway.



I Don't Care About Morality



One of the more common responses I get to utilitarianism is some combination of "well, screw morality", "you can't define happiness", and "the universe has no preferred morality".  And all of these are, in a sense, true: in the end it's probably not possible to truly define happiness, and the universe does not have a preferred morality.  And all of these are fine reasons to reject utilitarianism, as long as you wouldn't have any objections to any possible universe, including ones in which you and/or other people are being tortured en masse for no good reason.  But as soon as you say that it's "wrong" to steal or torture or murder, you've accepted morality; as soon as you say that it's bad to be racist or sexist, or you hold political positions, you've accepted morality.  You can't have it both ways.



Please, sir, can I have some more?



Perhaps the most common response I get to utilitarianism, however, is a combination of two statements.  The first, roughly speaking, is "then why aren't you just high all the time?"; the second, roughly, is "then why aren't you in Africa helping out poor kids?"  Yes, these two statements are contradictory; and yes, I often hear them together, at the same time, from the same people.  I'm going to ignore the first statement because it's just wrong, and instead focus on the second.

Imagine that you lived in a universe where the marginal utility of your 5th hour of time after work was greater when spent volunteering at a soup kitchen than when spent hanging out with friends.  (It's probably not too big a stretch of your imagination.)  Well, it's probably also true of the fourth hour after work.  And the third...  At some point you might start to lose sanity from working/volunteering too much and/or your productivity might significantly decline, but until that point it seems that utilitarianism says that if you've decided that some of your time--or money--can be better spent on others than on yourself, well, then, why not more of it?  Why not all of it?
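
Here's a toy version of that slide, with made-up numbers in which the value to others of a volunteered hour stays constant while the cost to you of giving up each additional hour grows:

```python
# Toy model with invented numbers: each free evening hour goes to yourself or to others.
VALUE_TO_OTHERS = 10   # an hour volunteered or donated helps others about this much, every time

def cost_to_self(hours_already_given):
    # giving up each additional hour hurts you more: fatigue, burnout, lost productivity
    return 3 + 2 * hours_already_given

hours_given = 0
for _ in range(5):                         # five free hours in the evening
    if VALUE_TO_OTHERS > cost_to_self(hours_given):
        hours_given += 1                   # the utilitarian verdict: give this hour too
    else:
        break

print(hours_given)   # 4 -- you keep giving hours until the marginal cost finally catches up
```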

I'm going to write much more on this later, but for now I have two points.  The first is that this, truly, is why people aren't utilitarians: in the end what scares people most about utilitarianism is that it encourages selflessness.  And the second point is that it would be really weird if a philosophy held that selfishness was good for the world.  Yes, of course dedicating some of your life to making the world a better place is good, and of course donating more is better.  Defecting on prisoner's dilemmas is bad, and cooperating is good.

I don't, of course, believe that everyone does or ever really will act totally selflessly and totally in the interest of the world, but to argue that they shouldn't is to sacrifice an almost tautologically true statement in an effort to reclaim the possibility that you're acting "well enough".

Next up: taking issue with decision theories.




9 comments:

  1. First, let me say I really appreciated this, and appreciated most of all that you chose to call it "Utilitarian Boogeymen". This is also pretty clearly written and commonsensical, which I appreciate.

    ~

    """"But as soon as you say that it's "wrong" to steal or torture or murder, you've accepted morality; as soon as you say that it's bad to be racist or sexist or you have political positions you've accepted morality. You can't have it both ways.""""

    I disagree. I think it is coherent to say that the torturing of me is bad for me, and I would prefer not to be tortured, even while rejecting morality. It would also be coherent to put forth a standard in which my utility (and my utility alone) should be maximized. Both of these would be odd and non-moral, but coherent.

    I also suspect you indirectly conflate morality in general with utilitarianism.

    ~

    """"I'm going to ignore the first statement because it's just wrong, and instead focus on the second.""""

    I think you miss a really interesting conversation about what exactly "happiness" is and what a utilitarian (especially a hedonistic one) cares to maximize. Especially think of wireheading and experience machines, another utilitarian staple!

    Replies
    1. Thanks!

      Re "wrong": Perhaps I should have been more clear here. Selfish utilitarianism (trying to maximize your own utility) is almost consistent (it has issues of self-definition, but is otherwise good); I was more responding to people who attempt to reject all morality, but still feel passionately about e.g. political issues that won't affect them.

      Re "the first statement": it's true that in the end perhaps some sort of wireheading is a utilitarian utopia, but right now if you're a utilitarian it's not what you should be doing with your life; given the distance between the world and what it could be, utilitarianism would much prefer we devote our lives to making the world a better place than to personally feeling pleasure.

    2. Even if you are perfectly fine with wireheading (all the better to engage with someone I presently disagree with!), I'd love to hear your responses to people (like me!) who think it truly repugnant and even dystopian.

    3. Hm. So, I am, in principle, fine with wireheading. I guess that my question for you is: why are you against it? Try to respond without using arguments that are clearly a product of the particular, path-dependent state of human society or your particular position within it.

    4. I think it's a bit disingenuous to discount the particular, path-dependent state of human society, because without that path, we may not even be applying utilitarianism. A different path might be using the copper-metal-maximization-ism mentioned in the other comment, and they wouldn't be meta-ethically wrong.

      I've decided that I'm not really able to yet summarize my thoughts on wireheading adequately, but a good start is the Felicifia conversation starting here, and continuing for additional comments.

  2. So I was thinking about this, and I think a lot of the repugnance of the repugnant conclusion is because people tend to default to "semi-selfish" utilitarianism. Let me start with an example.

    Let's say I'm walking down the street one day and an Extradimensional entity shows up and gives me a choice: Either I can have $10, or Sam can have $10. If Sam gets the money, he won't know it came from me, so I won't get any of it, or anything in exchange. In this case, I'm going to take the money for myself. This is because I value my marginal differential of utility more than Sam's.

    But now, let's say the choice is different. Either I can have $10, or Sam can have $1000. I value my own differential utility the most, but Sam is a pretty cool guy, and so I probably value his differential utility at more than a hundredth of my own. Because of this, I probably choose to let Sam have the $1000 (hey look, it's weighting, and this time it makes sense).

    Let's change the choice again. Either I can have $10, or Joe Random I don't know can get $1,000. I don't know Joe Random. Maybe he's a cool guy, but I don't really care about him anyway, I care about my $10. So I take my $10, because Joe Random's utility differential is valued at zero to me.

    This is intuitive "morality" to me. I matter lots, and then my friends mostly matter some, and then everyone else doesn't matter at all. The Repugnant Conclusion is Repugnant because a person tends to assume he and all his friends and acquaintances (and even countrymen, for some) will exist in either case, and in the "Repugnant" case they all have one utility instead of two.

    If one wants to use terms like "Good" and "Evil", Semi-Selfish Utilitarianism is probably more Evil than Equal Utilitarianism. Maybe your level of Evil can even be measured by the steepness of your utility differential valuing peak!

    Replies
    1. I totally agree with this. Sometime soon I'll write on who "counts" (e.g. how about people who aren't alive yet?), but this is a good summary of how most people think about it. And, of course, it's not a safe assumption that your friends would all exist in the smaller, less "repugnant" universe, but as you say people all assume that they would, because what sort of sick universe would choose your friends to not exist?

  3. Serious question: What's the basis for assuming that happiness is more important than copper metal production?

    Replies
    1. Ah, yeah, should have made that another boogeyman (though some of this is in the "I don't care about morality" section). Basic answer: the universe does not have a preferred morality, and in the end that means it's impossible to prove that any morality is right. Many are logically inconsistent, but by replacing happiness with some other quantity in utilitarianism you're kind of piggy-backing on its reasonably well-defined nature, and so yeah, your copper morality is going to be logically consistent (as long as you make a few correct decisions in defining it).

      Anyway, this is just a long way of saying that in the end it's impossible to prove any morality is better than any other one, because in the end the word "better", just like the words "should" and "right" and "wrong", needs to be given some framework before it's meaningful.

      The best I can do is to take the function whose domain is the set of people and whose output is the thing that is best for them--and show reasonably conclusively that this function is the thing whose aggregation we want to maximize--and hope that you'll agree with me that this function is the happiness function, and not, say, the copper metal production function.
