___________________________________
In this post I'm going to address a number of divisions in utilitarianism that don't require quite so much space as total vs. average utilitarianism.
High/low pleasure vs. Hedonistic utilitarianism
High/low pleasure utilitarianism (the view Mill defended with his distinction between "higher" and "lower" pleasures) is the thought that some forms of pleasure are intrinsically better than others, independent of their function for society. Hedonistic utilitarianism, as I'm using the term here, rejects this claim.
But why invent this distinction in the first place? In general adding new rules to a philosophy should be looked upon critically: they often just serve as ways for people to save themselves from the consequences of their beliefs, and this is a perfect example of it. I cannot help but note that in high/low pleasure utilitarianism, the "higher" form of pleasure--the more important one--is intellectual pleasure, and that it is a philosophy invented by intellectuals. It would be like if I created "Starcraft utilitarianism", which is like normal utilitarianism but pleasure gained playing the Starcraft II computer game is weighted more heavily--a relatively transparent attempt to justify my current activities as worthwhile. This is not to say, of course, that intellectuals are bad for the world--much of society's advancement is due to them--but they should have to justify their work and lifestyle on their own merits, not by inventing philosophies that arbitrarily favor what they do with their life.
This is all, of course, putting aside the issue of defining what exactly an intellectual pursuit is.
I am a hedonistic utilitarian.
Act vs. Rule utilitarianism
Act utilitarianism is what you usually think of as utilitarianism: the philosophy that you should try to maximize utility with each action. Rule utilitarianism, roughly, states that you should evaluate rules--for instance a possible law, or maybe a standard like "bike to work instead of driving" (what counts as a rule depends on the particular interpretation of rule utilitarianism)--on utilitarian grounds, and then follow the rules that come out best. It suffers from two quite obvious flaws. First, what exactly is a rule? If it's defined as a possible algorithm to follow, then rule utilitarianism just reduces to act utilitarianism (e.g. take the set of "rules" designed to produce the actions corresponding to each possible universe; choosing the best rule is then just choosing the best actions). Otherwise it's going to be impossible to define rigorously, or even semi-rigorously. Second, why would you think that utilitarianism is good for evaluating laws, but not everyday decisions?
This is not to say, of course, that whenever you have to make a decision you should get out a pen and paper and start adding up the utility of everyone in the world. In many everyday situations the act utilitarian thing to do is to create a (not fully well defined) rule and follow it in unimportant situations, to save your brain power for more important things. When I try to decide how to get to campus I don't spend time calculating the marginal effect that my driving would have on traffic for other drivers; I assume it'd probably make traffic worse, and that that'd probably decrease their utility. I also don't calculate the effect on global warming, etc.: I just recognize that those are all things that favor biking over driving, and so as a general matter I bike to campus and spend my time and brain power thinking about much more pressing issues that the world needs to confront, like Tim Lincecum's ERA. Similarly, I would generally be in favor of well-designed laws that incentivize biking over driving, so as to discourage defections in prisoner's dilemmas. But this is not an argument for rule utilitarianism as a philosophy, just an observation that it will sometimes recommend the same things act utilitarianism does.
For all of those reasons, I am an act utilitarian.
Classical utilitarianism vs. Negative utilitarianism
The terminology is a bit confused here, so to clarify, by this I mean: what is the relative weighting of pain and pleasure? Classical utilitarianism does not tend to make much of a distinction between the two (or, if you wish, weights them equally), whereas negative utilitarianism weights pain more heavily: sometimes just by an extra factor (i.e. h(p) = Happiness(p) - k*Pain(p) for some k > 1), and sometimes infinitely more, i.e. reducing suffering is always more important than increasing happiness. Still others try to split the difference between these two types of negative utilitarianism: weight all pain above some threshold infinitely, and weight pain below it equally with pleasure.
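To make the different weightings concrete, here is a minimal sketch in Python--my own illustration, not anything from the literature--of the classical, weak-negative, strict-negative, and threshold versions just described. The happiness and pain totals, the factor k, and the threshold are all invented for the example.

    # Illustrative only: "happiness" and "pain" are made-up non-negative totals
    # of pleasure and suffering for one person, measured in utils.

    def classical(happiness, pain):
        # Classical utilitarianism: pleasure and pain weighted equally.
        return happiness - pain

    def weak_negative(happiness, pain, k=2.0):
        # Weak negative utilitarianism: pain gets an extra factor k > 1.
        return happiness - k * pain

    def strict_negative(happiness, pain):
        # Strict (lexical) negative utilitarianism: reducing pain always comes
        # first; happiness only breaks ties among equally painful outcomes.
        # Comparing these tuples implements that priority.
        return (-pain, happiness)

    def threshold_negative(happiness, pain, threshold=15.0):
        # Threshold version: pain above the threshold dominates everything;
        # pain below it is weighted the same as pleasure.
        severe = max(pain - threshold, 0.0)
        mild = min(pain, threshold)
        return (-severe, happiness - mild)

    print(classical(10, 20))           # -10
    print(weak_negative(10, 20))       # -30.0
    print(strict_negative(10, 20))     # (-20, 10)
    print(threshold_negative(10, 20))  # (-5.0, -5.0)

In the last two versions, an outcome counts as better when its tuple is lexicographically larger, which is one way to cash out "infinitely more important" without picking a finite k.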
The strictest form of negative utilitarianism is clearly silly; it implies that if anyone anywhere is suffering, then no pleasure for anyone anywhere matters. The threshold version has the same problem whenever the threshold is crossed. In addition, neither of these is even a linear weighting of happiness and pain.
So, I'll just look at the weakest version: that h(p), the happiness of a person, is Happiness(p) - k*Pain(p) for some constant k > 1. There are a number of problems with this:
1) Are happiness and pain calculated separately for each person, or netted against each other first? I.e.: say p, at a given point, is feeling 10 utils of happiness and 20 utils of pain, and say k=2. Should you be calculating h(p) = 10 - 2*20 = -30, or h(p) = k*(10-20) = -20? The problem with the first version is that it is not going to be linear in how you carve up an individual's emotions: if you consider all the emotions about an upcoming interview as one set (netting them out before classifying the result as happiness or pain), and everything else as another set, you'll get a different answer than if you consider everything about their hunger as one set and everything else as another. So, say you go with the second version: then what you're really saying is that you care more about net unhappy people than net happy ones. (There's a short sketch of both readings after this list.)
2) k is totally arbitrary; why is pain twice as bad as pleasure is good? Why not 10 times? Why not 1.001 times? For that matter, why not 0.7 times?
3) Why do you care about pain more than pleasure in the first place? The reason, I think, is that you're grabbing on to the following fact about people: it's a lot easier to be really, really unhappy than it is to be really, really happy; there is just about no experience in the world as good as being tortured is bad. But that fact will already be reflected in classical utilitarianism: someone who is being tortured will naturally have much more negative utility than someone who just won the lottery will have positive utility, because that's how humans work, and so the correct actions will already weight them more heavily. Introducing k on top of that is just double counting.
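To make point 1 concrete, here is a small sketch of the two readings, again in Python with invented numbers: weighting total pain by k before netting, versus netting first and then weighting people who come out net unhappy. It also shows why the first reading depends on how you bundle a person's emotions; the interview figures below are hypothetical.

    K = 2.0  # the (arbitrary, per point 2) extra weight on pain

    def weight_then_net(happiness, pain, k=K):
        # First reading: keep total happiness and total pain separate,
        # scale the pain by k, then combine.
        return happiness - k * pain

    def net_then_weight(happiness, pain, k=K):
        # Second reading: net the emotions first, then scale the result by k
        # only if the person comes out net unhappy.
        net = happiness - pain
        return k * net if net < 0 else net

    # The numbers from point 1: 10 utils of happiness, 20 utils of pain, k = 2.
    print(weight_then_net(10, 20))  # -30.0
    print(net_then_weight(10, 20))  # -20.0

    # Bundling problem for the first reading: suppose the interview feelings
    # are +3 happiness and 8 pain. Counting them separately gives a different
    # total than first netting them into "5 utils of pain".
    print(weight_then_net(10 + 3, 20 + 8))  # counted separately: -43.0
    print(weight_then_net(10, 20 + 5))      # interview netted first: -40.0

The second reading avoids the bundling problem, but, as noted above, it amounts to caring more about net unhappy people than net happy ones.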
For all of these reasons, I am a classical utilitarian.
A point vaguely tangential to your second paragraph on act vs. rule: Often one just kind of assumes that people have the chance to think about their choices before doing things, but in reality that's not exactly true. What are your thoughts on the tradeoff between spending time thinking in order to reach a better decision, and relying on instincts or rules to simplify daily life and free up brain power?
That's a good question. My thoughts, basically, are that most people should be thinking a hell of a lot more than they do. I say this because I'm a rather contemplative guy, but it's only recently that I've started realizing how many "rules" I've taken for granted but shouldn't have. Until a year ago I hadn't really thought seriously about what I wanted to do with my life; I was just doing what was natural for my demographic. I never considered the consequences of what I eat, never really thought about what the future of the world should be, never really thought about how I should spend my time, never even really thought about what makes me happy--all really important questions where even a little bit of thought makes a huge difference. It takes a bit of work to get into the habit of questioning yourself and your actions more, but I think it's something almost everyone--certainly including myself--could benefit a lot from.
Against utilitarianism: http://thewrongmonkey.blogspot.com/2013/02/against-utilitarianism.html