Thursday, July 26, 2012

Utilitarianism, part 5: Who Counts?

This is the fifth post in a series about utilitarianism.  For an introduction, see the first post.  For a look at total vs. average utilitarianism, see here.  For a discussion of act vs. rule, hedonistic vs. two-level, and classical vs. negative utilitarianism, see here.  For a response to the utility monster and repugnant conclusion, see here.

_______________________


Another question that sometimes gets asked about utilitarianism is: who, exactly, is included in the calculation of world utility when evaluating possible scenarios?  People alive now?  People who would be alive then?  People who will live no matter what?  This question is often bundled in with total vs. average utilitarianism, but I've already written on that here and, I hope, demonstrated that average utilitarianism doesn't work.  I've also attempted to refute the "repugnant conclusion", the most commonly leveled criticism of total utilitarianism, with the "Sam is Great" rule here.  So I will assume that no matter what group of people counts, the correct thing to do with them is to total up their utility in order to evaluate a scenario.

The first thing to note is that total utilitarianism naturally handles the somewhat unpleasant question of whether dead people matter, and if so at exactly what point someone is "dead": dead people have zero utility and so won't affect the total utility of the world anyway.  You should include them in the calculation the whole time, but it will stop mattering once they're dead enough to not feel any pain or pleasure.


Not Yet Living


The most common form of not counting some people in utilitarian calculations is generally called prior-existence utilitarianism, which states that when calculating the utility of possible universes you should only include people who are already alive, i.e. not people who will be born sometime between the current time and the time that you are analyzing.  There are a number of variants on this, but the central idea is the same: there are some people who shouldn't yet count for evaluating the utility of future scenarios.

This idea, however, has two damning flaws.  To understand the first flaw, look at the following scenarios.

The Future Pain Sink


There are currently ten people in the world W, each with utility 1 (where 0 is the utility of the dead).  You are one of the ten, and trying to decide whether to press a button which would have the following effect: a person would be born with -10 utility for the rest of their life, which would last for 100 years, and you would gain 1 util of happiness for a year.  (If you like, you can imagine a much more realistic version of "pressing the button" which results in short-term pleasure in return for an increase in unhappy population, but for the sake of reducing complications I will stick to the button model.)  Do you press the button?  Well, if you are a prior-existence utilitarian, you would: you don't care about the not-yet-existing person's utility.  You would prefer the button-pressed universe to the non-button-pressed one, because the ten existing people would be net happier, and so you would press the button.  But then the new person comes into being, and you value its utility.  And now you prefer the universe where you hadn't pressed the button to the one where you had, even though you haven't yet gotten any of the positive effects of the button press.  You disagree with your past self, even though nothing unexpected has happened.  Everything went exactly as planned.  In fact, your past self could have predicted that your future self would, rationally, disagree with it about whether to press the button.
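To see the flip concretely, here's a minimal sketch (an illustration only; the function names and the treatment of utility as simple additive util-years are my own assumptions, with the numbers taken from the scenario above):

```python
# Change in each person's lifetime utility relative to not pressing the button.
def total_utility(changes):
    return sum(changes.values())

no_press = {}
press = {"presser": +1,          # you gain 1 util of happiness for a year
         "newborn": -10 * 100}   # the new person lives 100 years at -10 utility

# Prior-existence view: only people alive at decision time count, so drop the newborn.
prior_existence_press = {k: v for k, v in press.items() if k != "newborn"}

print(total_utility(no_press))               # 0
print(total_utility(prior_existence_press))  # 1     -> before the birth: press
print(total_utility(press))                  # -999  -> after the birth: wish you hadn't
```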


The Once King


Now, say that you're the king of an empire, the only empire on earth.  You could manage your grain resources well, saving up enough for future years; encourage farming practices that preserve the soil; and reduce carbon emissions from anachronistic factories--in short, you could make some small short-term sacrifices in order to let your kingdom flourish in fifty years.  But the year is 1000, and so no currently living people are going to be able to survive that long except you (being a king, you have access to the best doctors of the age).  And so, even though you're a utilitarian, you don't plan for the long term, because it would only help people not yet born.  But then fifty years pass and your kingdom is falling into ruin--and all of your subjects are suffering.  And so you curse your past self for having been so insensitive to your current universe.


The problem with both of these scenarios, and in general with prior-existence utilitarianism, is that your utility function is essentially changing over time: its domain is the set of living people, a set which is constantly changing.  And so your utility function will disagree with its past and future selves; it will not be consistent over time.  This will give some really weird results, like someone repeatedly pressing a button, then un-pressing it, then pressing it again, etc., as your utility function goes back and forth between including a person and not including them.  Any morality had better be constant in time, or it's going to behave really weirdly.



The second flaw is much simpler: why don't you care about future generations' happiness?  Why did you think this would be a good philosophy in the first place?  Why would you be totally insensitive to some people's utilities because of when they're born?  Their happiness and their pain will be just as real as current people's, and to ignore it would be incredibly short-sighted, and a little bit bizarre, like a bad approximation of weighting your friends' utilities more heavily than strangers'.


Friends and Family (and Self)


Speaking of which, the other common form of valuing people differently is valuing people close to you more (call it self-preferential utilitarianism).  So, for instance, you might weight your own happiness with an extra factor of 1,000, your immediate family's with a factor of 100, and your friends' with a factor of 10, or something like that.
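For concreteness, a weighted sum of that sort might look like the following sketch (the weights are just the illustrative factors above; the scenario and names are hypothetical):

```python
# Hypothetical weights for a self-preferential utility function.
WEIGHTS = {"self": 1000, "family": 100, "friend": 10, "stranger": 1}

def self_preferential_utility(affected):
    """affected: list of (relationship, change_in_utility) pairs."""
    return sum(WEIGHTS[relation] * change for relation, change in affected)

def plain_utility(affected):
    return sum(change for _, change in affected)

# A choice that gains you 1 util but costs five strangers 100 utils each:
outcome = [("self", 1)] + [("stranger", -100)] * 5
print(self_preferential_utility(outcome))  # 500: looks "worth it" under the weights
print(plain_utility(outcome))              # -499: clearly not, by the normal count
```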

Let me first say that there are of course good practical reasons to generally worry more about your own utility, and that of close friends and family, than that of strangers: it's often a lot easier to influence your own utility than that of someone you've never heard of; you can have significant control over your own life and the lives of those close to you; and maintaining close relationships (and living a reasonably well-off life, at least by global standards) can help prevent burnout.  But this is all already built into normal utilitarianism.  To the extent that the utilitarian thing to do is to make people happy by talking to them and hanging out with them, that's naturally going to be best done with friends: you already have an established connection with them, you know that you'll get along, and it would be difficult to find a random stranger and have an interesting conversation with them.  Once again, it's important not to double count intuitions by explicitly adding a term for something that is already implicitly taken care of.

Even if you are undeterred by this and want to weight friends, family, and self more than strangers, as a philosophy this is going to run into more problems.  First, to the extent that your circle of friends changes over time, you run into the same non-constant utility function problems that prior-existence utilitarianism has.  Second, it has the weird property that two people will disagree about what the right thing to do is even if they have the same information and see the same options for how the universe could turn out.  Third, as a consequence of this, everyone being an optimal friend-preference utilitarian would, in many circumstances, be dominated by everyone being a normal utilitarian.  The easiest example of that is the prisoner's dilemma.  Say there are two people in the world, A and B; each has a button they could press that would make them 1 util happier but cost the other 2 utils.  If both are self-preferential utilitarians they will both push the button, leaving both worse off than if they had both acted as normal utilitarians--even by self-preferential metrics.  That is to say, everyone following self-preferential utilitarianism does not always lead to the optimal self-preferential outcome for everyone, and can in fact be dominated by other strategies that everyone could follow.  Now, that's not a problem if all you want is a description of people's motivations, but it seems a bit weird to endorse a philosophy that produces sub-optimal results by its own measure.  Fourth, there's the sticky question of exactly what weight each person gets, and how that's decided.
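Here's a minimal sketch of that button game (cooperating means not pressing; the payoff numbers come straight from the scenario, the rest of the framing is my own):

```python
# Each press gives the presser +1 util and costs the other person 2 utils.
def payoffs(a_presses, b_presses):
    a = (1 if a_presses else 0) - (2 if b_presses else 0)
    b = (1 if b_presses else 0) - (2 if a_presses else 0)
    return a, b

print(payoffs(False, False))  # (0, 0)    both act as normal utilitarians
print(payoffs(True,  False))  # (1, -2)   A presses, B doesn't
print(payoffs(True,  True))   # (-1, -1)  both self-preferential: each worse off
```

Pressing is dominant for a self-preferential agent (1 beats 0, and -1 beats -2), but when both follow that logic they end up at (-1, -1), worse than the (0, 0) they would have gotten as normal utilitarians.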


All of these problems, in the end, come from the fact that self-preferential utilitarianism takes utilitarianism and throws an arbitrary, hard-to-define, non-universal, time-varying wrench into it.  This is the downfall of many flavors of utilitarianism; I've made similar points about average utilitarianism, negative utilitarianism, and high/low pleasure utilitarianism.  As with average utilitarianism, the problem with self-preferential utilitarianism is, in some sense, that it's not normal utilitarianism.


The astute observer will note that there is another divide in utilitarianism that would fit well under this title.  But it deserves quite a bit more space than this, and so it will have to wait until a later day.


2 comments:

  1. Another good post.

    One comment:

    """"The easiest example of that is the prisoner's dilemma. Say there are two people in the world, A and B; each have a button they could press that would make them 1 util happier but cost the other 2 utils. If both are self-preference utilitarians they would both push the button, leaving both worse off than if they had both acted as normal utilitarians--even by self-preference utilitarian metrics."""""

    This can be converted into a standard Prisoner's Dilemma with different-than-standard payoff ratios (+0 for mutual cooperation, -1 each for mutual defection, +2 for the winner / -1 for the sucker in cooperate/defect), and can be solved even by self-interested people if they are superrational.

    My theory is still that from a pure narrow self-interest point of view, utilitarianism is bad for us. (Though I leave it an open problem whether utilitarianism is good for us from a long/enlightened self-interest POV). This just doesn't matter much to me, because I don't care about my self-interest all the time.

    Replies
    1. Thanks!

      The thing with that version of the prisoner's dilemma is that it's not clear that a utilitarian defects; in fact the optimal scenario is one where one player cooperates and the other defects, and so a utilitarian would sometimes defect in order to maximize utility. (I'm assuming there isn't a typo; if you mean +1/-2 for the cooperate/defect case, then it's different.)

      And yeah, I agree that self-interested utilitarianism will make you happier than neutral utilitarianism, but by definition neutral utilitarianism would make people in general happier than self-interested utilitarianism (assuming you're properly counting all beings, even those that don't yet exist).

