This is my fourth in a series of posts on
utilitarianism. The
first is an introduction. The
second is a post on average vs. total utilitarianism. The
third is a post on act vs. rule, classical vs. negative, and hedonistic vs. high/low pleasure utilitarianism.
____________________________________
This post is a response to various objections people commonly raise against utilitarianism.
The Repugnant Conclusion
The Repugnant Conclusion (or mere addition paradox) is a thought experiment designed by Derek Parfit, meant as a critique of total utilitarianism. The thought experiment goes roughly like this: Suppose that there are 1,000 people on Earth, each with happiness 2. Well, a total utilitarian would prefer that there be 10,000 people each with happiness 1, or even better 100,000 people each with happiness 0.5, etc., leading eventually to lots and lots and lots of people each with almost no utility: in fact, for any arbitrarily small (but positive) utility U, there is a number of people N such that one would prefer N people at U utility to a given situation. This, says Parfit, is repugnant.
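To make the arithmetic explicit, here's a minimal sketch (in Python, purely illustrative) using the numbers from the paragraph above; the helper names are my own:

```python
import math

# Total utilitarianism ranks worlds by population * happiness-per-person.
def total_utility(population, happiness_per_person):
    return population * happiness_per_person

# The illustrative worlds from above:
print(total_utility(1_000, 2))       # 2,000
print(total_utility(10_000, 1))      # 10,000, preferred to the first
print(total_utility(100_000, 0.5))   # 50,000, preferred to both

# The general construction: for any positive per-person happiness U, a large
# enough population N beats any fixed baseline total V, since N * U > V
# whenever N > V / U.
def population_needed(baseline_total, happiness_per_person):
    return math.floor(baseline_total / happiness_per_person) + 1

print(population_needed(2_000, 0.015))  # 133,334 people at 0.015 utils beat 1,000 people at 2
```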
I would argue, however, that this potential earth--with, e.g., 1,000,000,000 people each with 0.015 utils of happiness--is far from a dystopia. First of all, it is important to realize that this conclusion only holds as long as U--the happiness per person--remains
positive; when U becomes negative, adding more people just decreases total utility. So, when imagining this potential planet it's important not to think of trillions of people being tortured; instead it's trillions of people living marginally good lives--lives worth living.
Still, many people have the intuition that 1,000,000,000,000 people at happiness 2 (Option A) is better than 1,000,000,000,000,000 people with happiness 1 (Option B). But I posit that this comes not from a flaw in total utilitarianism, but instead from a flaw in human intuition. You remember those really big numbers with lots of zeros that I listed earlier in this paragraph? What were they? Many of you probably didn't even bother counting the zeros, instead just registering them as "a really big number" and "another really big number, which I guess kind of has to be bigger than the first really big number for Sam's point to make sense, so it probably is." In fact, English doesn't even really have a good word for the second number ("quadrillion" sounds like the kind of thing a ten-year-old would invent to impress a friend). The point--a point that has been supported by research--is that humans don't fundamentally understand numbers above about four. If I show you two dots you know there are two; you know there are exactly two, and that that's twice one. If I show you thirteen dots, you have to count them.
And so when presented with Options A and B from above, people really read them as (A): some big number of people with happiness 2, and (B): another really big number of people with happiness 1. We don't really know how to handle the big numbers--a quadrillion is just another big number, kind of like ten thousand, or eighteen. And so we mentally skip over them. But 2 and 1: those we understand, and we understand that 2 is twice as big as 1, and that if you're offered the choice between 2 and 1, 2 is better. And so we're inclined to prefer Option A: because we fundamentally don't understand the fact that in Option B, one thousand times as many people are given the chance to live. Those are entire families, societies, countries that only get the chance to exist if you pick Option B; and by construction of the thought experiment, they want to exist, and will have meaningful existences, even if they're not as meaningful on a per-capita basis as in Option A.
Fundamentally, even though the human mind is really bad at understanding it, 1,000,000,000,000,000 is a lot bigger than 1,000,000,000,000; in fact the difference dwarfs the difference between things we do understand, like the winning percentage of the Yankees versus that of the Royals, or the numbers 2 and 1. And who are we to deny existence to those 999,000,000,000,000 people because we're too lazy to count the zeros?
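For the record, the totals behind Options A and B work out like this (a quick sketch, same caveats as above):

```python
# Option A: 10^12 people at happiness 2; Option B: 10^15 people at happiness 1.
total_a = 1_000_000_000_000 * 2       # 2,000,000,000,000 total utility
total_b = 1_000_000_000_000_000 * 1   # 1,000,000,000,000,000 total utility

print(total_b / total_a)   # 500.0: Option B has 500 times the total utility
print(1_000_000_000_000_000 - 1_000_000_000_000)  # 999,000,000,000,000 extra people who get to exist
```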
I have one more quibble with Parfit's presentation of the thought experiment: the name. Naming a thought experiment "The Repugnant Conclusion" is kind of like naming a bill "The Patriot Act" so that you can call anyone who votes against it unpatriotic. I'm all in favor of being candid about your thoughts, but be candid in analysis and discussion, not in naming, because a name is something everyone else has to use whether or not they agree with you.
By the way, I'm naming the above thought experiment the "if-you-disagree-with-Sam-Bankman-Fried-on-this-then-you-probably-have-a-crush-on-Lord-Voldemort conclusion", or "Sam Is Great" for short, and would appreciate it if it were referred to as such.
The Utility Monster
The Utility Monster is the other of the two famous anti-utilitarianism thought experiments. There are a few different versions of it floating around, all revolving around a hypothetical creature known as the Utility Monster. In some versions it gains immense amounts of pleasure from torturing people--more pleasure than the pain they feel--and in others it simply gains more pleasure than others from consuming resources, e.g. food, energy, etc., and so the utilitarian solution would be to give it all the resources while the rest of humanity either withers away or continues a greatly diminished existence.
There's something that seems intuitively wrong about giving all societal resources to one utility monster, but what's going on here is really the same thing as the
intuition behind negative utilitarianism: because it's so much easier to make someone feel really bad (e.g. torture them) than really good, no normal person could gain, by consuming the world's resources, anything close to enough pleasure to offset the losses associated with seven billion tortured people. In fact, if you took a random person and gave them unlimited resources, it's unlikely they'd be able to make up for even a single tortured person (especially given the lack of basically any happiness gained from income above about $75,000). In order to make up an additional factor of seven billion, the utility monster in question would have to be a creature that behaves fundamentally differently from any creature we've ever encountered. In other words, in order for utilitarianism to disagree with our intuitions and endorse a utility monster, the situation would have to be way outside the set of situations we have ever encountered.
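To put a very rough number on how high that bar is, here's a back-of-envelope sketch; every figure in it is an invented illustrative assumption, not a measurement:

```python
# How big would a utility monster's gain have to be for taking everyone's
# resources to be the utilitarian choice? All numbers are invented assumptions.
population = 7_000_000_000   # rough world population
loss_per_person = 10         # assumed utility each person loses living a greatly diminished life
best_human_gain = 100        # assumed most an ordinary person could gain from unlimited resources

total_loss = population * loss_per_person   # 70,000,000,000 utils of harm to offset
print(total_loss / best_human_gain)         # 700,000,000: the monster must benefit roughly 7e8 times
                                            # more than the best-case ordinary human
```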
And, in fact, it would be a little bit weird if the optimal decisions
didn't disagree with our moral intuitions in weird-ass situations, because our intuitions are not meant to deal with them. This is a phenomenon frequently seen in physics: when you get to extreme situations outside of the size/speed ranges humans generally interact with, our intuitions are wrong. It seems really weird that if you travel at the speed of light you don't age relative to the rest of the universe, but that's because our intuitions were developed for 10-mile-per-hour situations, not for the speed of light. It seems really weird that objects do not have well-defined positions or velocities but are instead described by complex-valued wavefunctions (probability amplitudes) floating through space, but that's because our intuitions weren't developed to deal with electrons. It seems really weird that splitting the atoms in a piece of plutonium can destroy a city, but it can.
Long story short, utility monsters are generally really bad bargains even for a utilitarian, and trying to work around them is just double-counting this effect (just as negative utilitarianism double-counts our intuition that it's harder to be really happy than really sad). And when a utility monster really is the utilitarian option, you're in a really bizarre situation in which we shouldn't expect our intuitions to work anyway.
I Don't Care About Morality
One of the more common responses I get to utilitarianism is some combination of "well, screw morality", "you can't define happiness", and "the universe has no preferred morality". And all of these are, in a sense, true: in the end it's probably not possible to truly define happiness, and the universe does not have a preferred morality. And all of these are fine reasons to reject utilitarianism, as long as you wouldn't have any objections to any possible universe, including ones in which you and/or other people are being tortured en masse for no good reason. But as soon as you say that it's "wrong" to steal or torture or murder, you've accepted morality; as soon as you say that it's bad to be racist or sexist, or as soon as you hold any political position at all, you've accepted morality. You can't have it both ways.
Please, sir, can I have some more?
Perhaps the most common response I get to utilitarianism, however, is a combination of two statements. The first, roughly speaking, is "then why aren't you just high all the time?"; the second, roughly, is "then why aren't you in Africa helping out poor kids?" Yes, these two statements are contradictory; and yes, I often hear them together, at the same time, from the same people. I'm going to ignore the first statement because it's just wrong, and instead focus on the second.
Imagine that you lived in a universe where the marginal utility of your fifth hour of time after work was greater spent volunteering at a soup kitchen than spent hanging out with friends. (It's probably not too big a stretch of your imagination.) Well, it's probably also true of the fourth hour after work. And the third... At some point you might start to lose sanity from working/volunteering too much, and/or your productivity might significantly decline, but until that point it seems that utilitarianism says that if you've decided that some of your time--or money--can be better spent on others than on yourself, well, then, why not more of it? Why not all of it?
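Here's a toy version of that hour-by-hour comparison; the marginal-utility numbers are entirely made up for illustration:

```python
# For each successive free hour, compare the (assumed, invented) marginal utility
# of spending it on yourself vs. volunteering, and count the hours where
# volunteering wins. The argument above says to reallocate exactly those hours.
mu_self = [5, 4, 3, 2, 1]           # diminishing returns on leisure (assumed)
mu_volunteering = [6, 6, 4, 2, 1]   # also declines, e.g. as productivity and sanity slip (assumed)

hours_reallocated = sum(
    1 for mine, theirs in zip(mu_self, mu_volunteering) if theirs > mine
)
print(hours_reallocated)  # 3 of the 5 free hours, under these made-up numbers
```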
I'm going to write much more on this later, but for now I have two points. The first is that this, truly, is why people aren't utilitarians: in the end what scares people most about utilitarianism is that it encourages selflessness. And the second point is that it would be really weird if a philosophy held that selfishness was good for the world. Yes, of course dedicating some of your life to making the world a better place is good, and of course donating more is better. Defecting on prisoner's dilemmas is bad, and cooperating is good.
I don't, of course, believe that everyone does or ever really will act totally selflessly and totally in the interest of the world, but to argue that they shouldn't is to sacrifice an almost tautologically true statement in an effort to reclaim the possibility that you're acting "well enough".