Sunday, July 15, 2012

Utilitarianism part 2: Total, Average, and Linearity

This is the second in a series of posts about utilitarianism.  The first is here.  Before I get started, though, there's one definition I'd like to make: a philosophy is an algorithm that orders all possible universes from best to worst; the ordering has to be transitive, reflexive, and any two universes have to be comparable.
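To put that requirement in symbols (nothing beyond what the sentence above already says; the ≿ notation is just my shorthand for "at least as good as"):

```latex
% "u \succsim v" reads: universe u is at least as good as universe v.
\begin{align*}
&\text{Reflexivity:}   && u \succsim u \ \text{for every universe } u\\
&\text{Transitivity:}  && \text{if } u \succsim v \text{ and } v \succsim w \text{, then } u \succsim w\\
&\text{Completeness:}  && u \succsim v \ \text{or } v \succsim u \ \text{for any two universes } u, v
\end{align*}
```

In order-theoretic terms, this makes the ranking a total preorder on the set of possible universes.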

______________________________________




One of the most contentious issues in intra-utilitarianism debates is how to aggregate utility between different people.  For the sake of this post I will put off discussions of negative vs. classical utilitarianism, high/low pleasure vs. hedonistic utilitarianism, and other distinctions within the measurement of one individual's utility; I'll discuss those in later posts.  I'm also going to postpone discussion of the repugnant conclusion to a later article, though it is relevant to this one.  So, for now assume we have some (as of now only relative) utility function h(p) which takes one person and spits out their utility, and we want to find some function H(w) which takes the world and spits out a utility of the world.

There are two canonical ways to construct H from h. The first, total (or aggregate) utilitarianism, is to just total up everyone's happiness: H(w) = Sum(h(p)) for all p in w. The second, average utilitarianism, is to average h(p) over every person p: H(w) = (Sum(h(p)) for all p in w)/(population of w).
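To make this concrete, here's a minimal sketch in Python (the list-of-utilities model of a world and the function names are just my own illustration, not anything canonical):

```python
# A "world" is modeled, crudely, as a list of per-person utilities h(p).

def total_utility(world):
    """Total (aggregate) utilitarianism: H(w) = Sum(h(p)) for all p in w."""
    return sum(world)

def average_utility(world):
    """Average utilitarianism: H(w) = Sum(h(p)) divided by the population of w."""
    return sum(world) / len(world)

# Example: a world of three people with utilities 2, 2, and 3.
world = [2.0, 2.0, 3.0]
print(total_utility(world))    # 7.0
print(average_utility(world))  # 2.333...
```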

Defining Zero

I define zero utility to be the utility of anyone who does not feel anything--for instance, a dead person.  For a longer description of why, see the bottom of the post*.


Problems with Average Utility


There are a number of problems that come up, though, if you try to use average utilitarianism. I'm going to start by giving a few thought experiments, and then talk about what it is about average utilitarianism that leads to these conclusions.

The separate planets distinction

First, say that you have to choose between two possible worlds: one with 10,000 people with utility 2 and 100 with utility 3, and another with just the 100 people with utility 3. An average utilitarian would have to choose option two, even though it just involves denying life to a bunch of people who would lead reasonably happy lives. But perhaps you could try to defend this; after all, the world would contain only really happy people in the second scenario.

Alright, then. Say that there are two planets, planet A and planet B. The two planets are separated by many lightyears and are never going to interact. Planet A has 10,000 people each with utility 2; Planet B has 100 people with utility 3. You're a being who is presented with the following option: do you blow up planet A? Say that you're relatively sure that if you let the planets continue, their utilities will remain as they are now. You see the problem now. This is a lot like the first scenario, but here it's clearly better to have planet A around; it's a happy planet that's not hurting anyone else. But if you're computing the average utility of the universe, planet A is dragging it down, and an average utilitarian would want to blow it up.
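To make the arithmetic explicit (a toy calculation using the numbers from the scenario; the same numbers cover the first scenario above too):

```python
planet_a = [2.0] * 10_000   # 10,000 people, each at utility 2
planet_b = [3.0] * 100      # 100 people, each at utility 3
universe = planet_a + planet_b

avg_with_a = sum(universe) / len(universe)       # 20300 / 10100 ~= 2.01
avg_without_a = sum(planet_b) / len(planet_b)    # 300 / 100 = 3.0
total_with_a = sum(universe)                     # 20300
total_without_a = sum(planet_b)                  # 300

# Blowing up planet A *raises* the average (2.01 -> 3.0), so an average
# utilitarian pulls the trigger; the total drops from 20,300 to 300.
print(avg_with_a, avg_without_a, total_with_a, total_without_a)
```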

Ok, you say, what if I just treat them as two different universes, average each individually, and then make decisions separately for them?  The thing is, I can construct scenarios anywhere in between the two listed above: maybe there is one planet with two countries, A and B, with the same populations as the planets in the above scenario. Then what do you do? How about different families that live near each other but won't ever really interact that much?

The happy hermit

Alright, now say we're back on earth. You've done some careful studying and determined, reasonably, that the average human utility is about 1.3 utils. You go hiking and discover, in the middle of a rocky crevice, a hermit living alone in a cabin; no one has visited him for 50 years. You talk to him a bit, and he seems reasonably happy. You use your magic Utility Box and find out that his utility is 0.8 utils: positive, though lower than the average human's. He enjoys his life in the mountains. You have a gun. Do you kill him (assuming that it wouldn't cause you psychological harm, etc.)? An average utilitarian would.
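Again, the arithmetic (a toy calculation; the population figure is made up, and nothing hinges on it):

```python
# Rest of the world: N people averaging 1.3 utils; the hermit sits at 0.8.
N = 7_000_000_000                      # world population, made up for illustration
rest_of_world_total = 1.3 * N

avg_with_hermit = (rest_of_world_total + 0.8) / (N + 1)   # a hair under 1.3
avg_without_hermit = rest_of_world_total / N              # exactly 1.3

# Killing the hermit nudges the world average *up* (he's below average),
# even though the world's total utility drops by his 0.8 utils.
print(avg_with_hermit < avg_without_hermit)   # True
```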

___________________



Both of these thought experiments exploit the same flaw in average utilitarianism: it's not linear. Here's what that means. Say there's a universe U composed of two non-interacting sub-universes u1 and u2. For average utility, H(U) does not equal H(u1)+H(u2). The upshot is that the verdicts you get will depend on how you split the universe up when you're doing your math. In the separate planets scenario, it mattered whether you considered the two planets (or countries, or families) together or separately. In the hermit example, it mattered whether you considered the world as a whole, or the hermit and the rest of the world separately.
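A quick check of the linearity claim, with toy numbers (again just a sketch; u1 and u2 stand in for the two non-interacting sub-universes):

```python
u1 = [2.0, 2.0, 2.0]   # sub-universe 1: three people at utility 2
u2 = [3.0]             # sub-universe 2: one person at utility 3
U = u1 + u2            # the combined universe

total = lambda w: sum(w)
average = lambda w: sum(w) / len(w)

print(total(U) == total(u1) + total(u2))        # True:  9.0 == 6.0 + 3.0
print(average(U) == average(u1) + average(u2))  # False: 2.25 != 2.0 + 3.0
# Worse, you can't recover average(U) from the sub-averages at all without
# also knowing the populations -- how you split the universe up matters.
```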

This is a rather fatal flaw for a philosophy to have; it shouldn't matter whether you consider non-interacting parts together or separately, and it also shouldn't matter exactly how much interaction it takes to be, you know, like, intertwined and stuff.

There's another way to look at the flaws with average utilitarianism, though: when you're considering someone's impact on world utility, you look not just at how happy they are and how happy they make other people, but also at how happy everyone else is (and would be whether or not the person in question existed), and at how many other people there are. In other words, as Adam once put it, in some sense the problem with average utilitarianism is that it's not total utilitarianism.

And so I am a total utilitarian.

_________________________________________________________________________________

*If you're an average utilitarian it doesn't matter if you offset all utilities by a given amount; it won't change comparisons between any two options: if (Sum(h(p)) for all p in w1)/(population of w1) > (Sum(h(p)) for all p in w2)/(population of w2), then (Sum(h(p)+k) for all p in w1)/(population of w1) > (Sum(h(p)+k) for all p in w2)/(population of w2). If you're a total utilitarian, though, it does matter.

So, what's zero utility? In other words, what's an example of someone whose utility is completely neutral; who feels neither happiness nor pain? A dead person. (Or, perhaps, an unconscious person.) This leads to the natural zero point for utility: h(p)=0 means that p is as happy as an unfeeling and/or dead person; as happy, in other words, as a rock.

This definition turns out to be quite necessary. If you put the zero point anywhere else then you have to decide which dead people to include in your calculations; they're providing non-zero utility and so will affect the utility of various possible universes. Alright, you say, I won't include any dead people. Well how about people in vegetative states with no consciousness, happiness, or pain? How about fetuses before they've developed the ability to feel pain? How about a fertilized egg? How about an unfertilized one? How about someone who was shot by a gun and is clearly going to die and has lost all brain function, but it's not clear at what point the doctor standing around him is going to pronounce him dead? The point is that all of these people clearly don't contribute to the total utility of the world, and so shouldn't influence calculations; furthermore, exactly how we decide when someone is "dead" or "basically dead" shouldn't influence it. So it is necessary to define h(p_d)=0 for any unfeeling and/or dead person p_d.
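A quick numerical check of both claims in the footnote (toy worlds; w1 and w2 just need different populations for the offset to matter), plus the units point in the note just below:

```python
w1 = [2.0] * 10   # 10 people at utility 2
w2 = [3.0] * 2    # 2 people at utility 3

avg = lambda w: sum(w) / len(w)
tot = lambda w: sum(w)
shift = lambda w, k: [h + k for h in w]
scale = lambda w, c: [c * h for h in w]

# Offsetting every utility by k leaves average comparisons alone...
print(avg(w1) > avg(w2), avg(shift(w1, -5)) > avg(shift(w2, -5)))  # False, False
# ...but can flip total comparisons when the populations differ:
print(tot(w1) > tot(w2), tot(shift(w1, -5)) > tot(shift(w2, -5)))  # True, False

# Multiplying everything by a positive constant changes neither comparison:
print(avg(scale(w1, 2)) > avg(scale(w2, 2)), tot(scale(w1, 2)) > tot(scale(w2, 2)))  # False, True
```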

Note, also, that neither average nor aggregate utilitarianism cares about the units used for happiness; multiplying all utilities by a positive constant doesn't change any comparisons. So, I'll measure utility in units of utils, though I'll generally omit unit labels.

13 comments:

  1. Sam-- This seems to me a completely persuasive (indeed, devastating) critique of average utilitarianism. I eagerly await the post on the Repugnant Conclusion.

    1. Thanks--I think I'm going to write on Repugnant Conclusion, Utility Monster, etc. next.

  2. Sam-- A further thought on further reflection about the domain of your argument.

    All of your examples involve situations in which members of the group that has higher utility will themselves lose nothing by letting others come into/remain in existence--in short, they presuppose that the world where the others exist is a strict Pareto improvement over a world where they do not. Opting for a strictly Pareto-inferior outcome makes the average utilitarian position look maximally indefensible-- i.e., what kind of moral idiot would you have to be to want to kill others/keep them from coming into being just for the sake of maximizing some abstract number (average utility), when it is no skin off your or anyone else's nose that they should live, and their lives will be reasonably happy? (I assume you are appealing to this intuition when you say "....even though it just involves denying life to a bunch of people who would lead reasonably happy lives.") I agree that this position is indefensible-- but so also would most nonconsequentialists, at least if they don't assign a value to equality in and of itself. So, first, let's change the hypothetical to one that poses an interpersonal tradeoff between the people with higher utility and the people with lower utility, to make the case tougher for total utilitarianism.

    Second, three of your four examples involve killing rather than not letting come into existence. Whether or not we should treat these two means to the same end differently obviously implicates the Repugnant Conclusion, and I don't want to address the issue here. Let's assume for the moment that the Sam is Great principle is right (how could it not be???) and Parfit is wrong. The problem is, nonconsequentialists will object to killing, but because of the act/omission distinction, not because of the case for average over total utilitarianism. So again, to make sure you are not getting an illicit argumentative boost from the independent wrongness of killing (from the nonconsequentialist point of view), let's strip out this feature as well.

    So, consider a variant on your hypo that eliminates the Pareto superiority of total utilitarianism and also killing as a means to boost average utility:

    A country in Latin America has a population of 1 million. The people are poor, but not destitute; let's say each has a utility of 1. It is choosing between two population growth policies. Policy A would opt for zero population growth, which is predicted to leave everyone with a utility of 1. Policy B would increase the population 10-fold, leaving each of the 10 million with a utility of 0.2. A total utilitarian will opt for policy B. This brings us to Parfit's 'Sam is Not Great' objection to total utilitarianism (aka the 'Repugnant' Conclusion), and the case for persuading average utilitarians that Policy B is preferable and they are idiots if they don't think so boils down to the case against the Repugnant Conclusion.

    All of which is to say that I think you are absolutely right about the morally appealing outcome in each of your hypos above, but you've got the wind at your back here, and as a result have perhaps knocked out only the versions of average utilitarianism with the least appeal to everyone. So on to the Repugnant Conclusion...

    1. As for them all being situations where the happy don't lose anything--it's true that this is the harshest case for average, which is why I used it: it's the case where average is most clearly wrong. The other case--where the happy do lose something--is just the repugnant conclusion, which I thought was best treated separately.

      As for killing vs. bringing into existence--I just used killing because it's easier to construct easy-to-parse sentences that way; I had forgotten that some people make a distinction. (It's going to be impossible to use that distinction in a logically consistent philosophy; maybe I'll write up why later, but long story short it's going to be history-dependent, and it suffers from the other flaws that all the bad variants suffer from: it's not well defined, since it relies on telling exactly when people are dead; it leans on a distinction between killing and saving lives that is going to be untenable; etc.).

      In response to the rest of your comment--different scenarios notwithstanding, I think the examples here clearly show that average utilitarianism at least sometimes totally and obviously breaks down. The repugnant conclusion doesn't change that; it's just an attempt to show that aggregate also breaks down in some spots (which I responded to in the other post).

  3. 1. Killing v. not saving is the act/omission distinction. Contraception is not killing by omission. If a view requires treating contraception or other failure to reproduce as the equivalent of killing, because it could lead to the "same state of affairs" (whatever that means), that's a drawback.
    2. Suppose Philosophy A requires that we value and pursue consequence A, but has the result that we realize consequence B; and Philosophy B requires that we value and pursue consequence B, but has the result that we realize consequence A. Which is the better philosophy? (Or Philosophy B is nonconsequentialist, but widespread adherence yields more of consequence A than does widespread adherence to Philosophy A, etc.)
    3. There are some downside risks with the pursuit of aggregate utility. We might not be very good at predicting and controlling consequences. Large populations of slightly happy people might easily become very large populations of deeply miserable, desperate, starving, homicidal people. If we think killing and letting die are both a lot worse than failing to bring into existence, we have reason to be very cautious about approaching the repugnant margin, because overshooting is a lot worse than undershooting. Another danger is that a large population is unsustainable and there is a tipping point beyond which conditions for human life become unsustainable (and of course lower aggregate utility for infinite time is better than higher aggregate utility for finite time).
    4. The obvious response is that these dangers are properly factored into aggregate utility, which can therefore achieve conservation without adopting the mistaken premises of average utility. But a difficulty with both types of utility is the rest of us having to worry about being murdered by average utilitarians or immiserated and extinguished by aggregate utilitarians who miscalculate. We might all feel more secure and therefore happier if we just legislate some intuitively appealing rules and outlaw act utilitarianism.
    --Guyora

    1. 1. Yeah, I'll probably write something on the act/omission distinction. My basic thought is that it's pretty untenable--it's very quickly going to break down when you look at borderline cases, and could lead to some very unhappy people. It's also very hard to create a well-defined philosophy that includes it.

      2. Assuming that A is the desired outcome, then philosophy A is the correct one, but it might be in the interests of philosophy A to convince people to follow philosophy B.

      4. Same response, basically, as (2): it might be that the utilitarian thing to do is to write laws that don't appear explicitly utilitarian (though this isn't necessarily the case).

  4. 1. I think you are probably right about the act/omission distinction, but if I understand what you've written you are going further than equating acts and omissions. You seem to be equating destruction with failing to create. If so, that's a much tougher argument.
    2. The point of my hypothetical philosophies is that, as I understand it, you have defined a philosophy as a preference-ordering, which pretty much excludes by definition deontology, virtue ethics, rule utilitarianism and pragmatism, as well as some plausible interpretations of utilitarianism itself. It might be the case that the best philosophy is a preference ordering, but shouldn't you have to argue for that?
    4. One problem with an interpretation of utilitarianism as requiring that we fool other people into holding false beliefs is that it is at odds with Bentham's own views which required a transparent state and mistrusted officials to pursue the public interest otherwise. Honesty, publicity, discursive clarity and democratic accountability are epistemological requisites for determining what utility is and what it requires. We can't have Sam deciding for us what will maximize public utility because that's not a suitable way of identifying it. Sam's job is to determine private utility, his own. Our job is to set the incentives for Sam's pursuit of his utility to conduce to public utility. Of course on this (correct, in my view) interpretation of utilitarianism it's not really what contemporary philosophers mean by a moral philosophy. It's more like a practice of policy analysis.
    --Guyora

    1. 1. You're correct, I am making both points. It seems intuitively pretty obvious to me, but it's becoming clear that it's not so for most people, so I think I'm going to post something more fleshed out on act/omission, creation/destruction, etc. soon.

      2. Yup, I'm defining a philosophy as a preference-ordering. If not a preference ordering, though, what is a philosophy? This, I think, is one of those points where it's hard for me to refute every possible definition simultaneously, but if you have another definition for a philosophy I'd be happy to hear it and respond.

      4. What I'd say in response to this is that in general utilitarianism does promote transparency, etc. It's true of course that you can come up with convoluted examples where it doesn't, but those are outside the norm. Bentham did not think that transparency was the building block of morality, just generally a consequence of it. He also saw many other things as generally consequences of utilitarianism, e.g. anti-racism, anti-sexism, a more humane jail system, etc., even though these were not the fundamental tenets of his philosophy. So, basically, it's true that utilitarianism occasionally incentivizes lying, but in practice relatively rarely, and it's a good thing that it does so occasionally--any philosophy that never incentivizes lying is going to have some serious issues with, e.g., Nazi inquisitions about hiding Jews in WWII.

  5. 2. A philosophy might be an appealing resolution to an intractable disagreement (Hobbes' Leviathan or Rawls' veil of ignorance as a way of reconciling competing visions of the good), or a method of justification (fallibilism) worked out in detail and illustrated by application. One way to define the concept is empirically, by seeing what traits different philosophies have.

    What you have in mind, I think, is something smaller than a philosophy: an interpretation of utility as a criterion for rational choice. But the importance of choosing among these interpretations of rationality depends on some prior philosophical choices.

    4. Just so we understand each other, my claim is not that transparency is contingently conducive to utility conceived as the greatest good of the greatest number as perceived by a benevolent person trying to decide what to do. My claim is that transparency is constitutive of utility in the settings for which utilitarianism was designed as the solution to a philosophical problem.

    The philosophical problem was the one exercising lots of 18th century reformist minds: how to make democracy virtuous. The answer was a set of procedures designed to cure epistemic and incentive problems. It was philosophically important to develop a transparent, public, shareable, consensus measure of utility to settle disagreement among people with competing views and interests. It was not philosophically important to develop an answer to the question of what individuals should do, because individuals are going to do what they want, and we don't have much chance of making the world better just by exhorting people to be good. So Bentham was just not very interested in providing pastoral advice on how individuals should manage their moral obligations. He was interested in designing processes for collective choice. Utilitarianism was valuable as a set of ground rules for public debate, as a focus for social research and a method of policy analysis.

    Of course anyone is entitled to work out the implications of act utilitarian ethics today. But the fact that that project didn't much interest Bentham should make modern moral philosophers wonder if they've correctly grasped the argumentative structure of utilitarianism.
    --Guyora

    1. 2. One could try to find commonalities between philosophies, but if one did I suspect they'd find that the single most common thread is an attempt to create a belief system using lots of lofty sounding principles that justifies the way the creator is living their life--something slightly less useful, and noble, than I have in mind when I think of what a philosophy should be.

      Perhaps this is just a matter of definitions, but if I understand what you said correctly then my interpretation of utilitarianism includes the prior philosophical choices.

      4. Re: Bentham: My sense is that, as you said, Bentham focused mainly on legislative aspects of utilitarianism because those are where he could be most effective; it is much harder to get people to change their behavior than to write a law forcing them to. But that doesn't mean that individual decisions are irrelevant; in fact, if he didn't care about individuals' decisions there would be no point in making the law in the first place (and as I argue here (http://measuringshadowsblog.blogspot.com/2012/07/utilitarianism-part-3-classical-act-one.html), rule utilitarianism is a pretty silly concept). Anyway, in the end I describe myself as a Benthamite because it's a useful shorthand and because I believe (and I think most people do) that he was a utilitarian in the same mold as I am; but fundamentally I'm an act, total, hedonistic utilitarian, whatever you want to call Bentham.

  6. OK.
    Let private utility=the greatest good of an individual decisionmaker.
    Let public utility=the greatest good of a collective decisionmaker.
    Let rational choice=that decision maximizing utility for the decisionmaker.
    It follows that a collectivity rationally exercises its power by deploying it to maximize public utility. But this does not necessarily entail encouraging or incentivizing individuals to do whatever they think will maximize public utility.
    That maximizing public utility is the best answer to the question "what is most rational for us to do collectively," does not entail that maximizing public utility is the best answer to the question "what should I do." Hence, act utilitarian ethics does not follow from utilitarian policy analysis, even though utilitarian policy does indeed try to influence individuals' decisions.
    --Guyora
    PS: If this interpretation of Bentham interests you further, it is set out in my article, Guyora Binder & Nicholas J. Smith, "Framed: Utilitarianism and Punishment of the Innocent" 32 Rutgers L.J. 115 (2001).

      1. If I understand what you're saying correctly, your point is that different opinions could lead to this. It's true that a collectivity rationally exercising its power would not entail it incentivizing individuals to do what the individuals think would maximize public utility--but it does mean that the collectivity should incentivize individuals to do what the collectivity thinks would maximize public utility (taking into account whatever expertise the individuals have that might make deferring slightly to their judgement optimal). What's at play here is not a split between act and rule utilitarianism but a possible difference in the judgement calls made by different people and bodies--the collectivity will disagree with some individuals about what would maximize public utility. But this is just like saying that I don't want you to do what you think will maximize public utility; I want you to do what I think will maximize public utility, because I presumably think there are some times when I'm right and you're wrong, and that those times are more frequent than the opposite--or else I would do well to change all of my opinions to yours.

  7. "there's one definition I'd like to make: a philosophy is an algorithm that orders all possible universes from best to worst; the ordering has to be transitive, reflexive, and any two universes have to be comparable."

    I understand what you are doing here, but I really don't like your choice of the word 'philosophy'. 'Philosophy' is a gigantic field. In particular, there is a big distinction between 'orders all universes' and 'requests that individuals do specific things'. For instance, one could imagine philosophical systems that admit of utilitarian universe orderings, but consider that individuals have absolutely no responsibility to act on this knowledge.

    I spent some time trying to understand these distinctions recently. I was pointed in the direction of Value Theory.
    "In its narrowest sense, “value theory” is used for a relatively narrow area of normative ethical theory particularly, but not exclusively, of concern to consequentialists. In this narrow sense, “value theory” is roughly synonymous with “axiology”. Axiology can be thought of as primarily concerned with classifying what things are good, and how good they are. For instance, a traditional question of axiology concerns whether the objects of value are subjective psychological states, or objective states of the world."
    http://plato.stanford.edu/entries/value-theory/

    That said, I'm not exactly sure what to call one specific definition of value (a ranking of universes). It seems like it would be interesting to categorize the space of all definitions of value, and then select the largest possible set of definitions for any given argument (this argument is relevant for the set of X definitions of value).

