Friday, July 20, 2012

The Utilitarian Boogeymen

This is my fourth in a series of posts on utilitarianism.  The first is an introduction.  The second is a post on average vs. total utilitarianism.  The third is a post on act vs. rule, classical vs. negative, and hedonistic vs. high/low pleasure utilitarianism.

____________________________________



This post is a response to various objections that are often raised against utilitarianism.


The Repugnant Conclusion


The Repugnant Conclusion (or mere addition paradox) is a thought experiment designed by Derek Parfit, meant as a critique of total utilitarianism.  The thought experiment goes roughly as follows:  Suppose that there are 1,000 people on Earth, each with happiness 2.  Well, a total utilitarian would prefer that there be 10,000 people each with happiness 1, or even better 100,000 people each with happiness 0.5, etc., leading eventually to lots and lots and lots of people each with almost no utility: in fact, for any arbitrarily small (but positive) utility U, there is a number of people N such that one would prefer N people at U utility to a given situation.  This, says Parfit, is repugnant.
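
To make the arithmetic concrete, here is a minimal sketch (using the 1,000-person, happiness-2 baseline from above) showing that for any positive per-person utility U, some population N produces a higher total:

```python
# Minimal sketch of the total-utilitarian arithmetic behind the Repugnant
# Conclusion, using the baseline from the paragraph above.
import math

baseline_total = 1_000 * 2  # 1,000 people at happiness 2

for u in [1, 0.5, 0.01, 0.0001]:
    # Smallest population whose total utility beats the baseline.
    n = math.floor(baseline_total / u) + 1
    print(f"At utility {u} per person, {n:,} people give a total of {n * u:.4f}"
          f" (baseline: {baseline_total})")
```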

I would argue, however, that this potential earth--with, e.g., 1,000,000,000 people, each with 0.015 utils of happiness--is far from a dystopia.  First of all, it is important to realize that this conclusion only holds as long as U--the happiness per person--remains positive; when U becomes negative, adding more people just decreases total utility.  So, when imagining this potential planet it's important not to think of trillions of people being tortured; instead it's trillions of people living marginally good lives--lives worth living.

Still, many people have the intuition that 1,000,000,000,000 people at happiness 2 (Option A) is better than 1,000,000,000,000,000 people at happiness 1 (Option B).  But I posit that this comes not from a flaw in total utilitarianism, but instead from a flaw in human intuition.  You remember those really big numbers with lots of zeros that I listed earlier in this paragraph?  What were they?  Many of you probably didn't even bother counting the zeros, instead just registering them as "a really big number" and "another really big number, which I guess kind of has to be bigger than the first really big number for Sam's point to make sense, so it probably is."  In fact, English doesn't even really have a good word for the second number ("quadrillion" sounds like the kind of thing a ten year old would invent to impress a friend).  The point--a point that has been supported by research--is that humans don't fundamentally understand numbers above about four.  If I show you two dots you know there are two; you know there are exactly two, and that that's twice one.  If I show you thirteen dots, you have to count them.

And so when presented with Options A and B from above, people really read them as (A): some big number of people with happiness 2, and (B): another really big number of people with happiness 1.  We don't really know how to handle the big numbers--a quadrillion is just another big number, kind of like ten thousand, or eighteen.  And so we mentally skip over them.  But 2 and 1: those we understand, and we understand that 2 is twice as big as 1, and that if you're offered the choice between 2 and 1, 2 is better.  And so we're inclined to prefer Option A, because we fundamentally don't understand the fact that in Option B, one thousand times as many people are given the chance to live.  Those are entire families, societies, countries that only get the chance to exist if you pick Option B; and by construction of the thought experiment, they want to exist, and will have meaningful existences, even if they're not as meaningful on a per-capita basis as in Option A.

Fundamentally, even though the human mind is really bad at understanding it, 1,000,000,000,000,000 is a lot bigger than 1,000,000,000,000; in fact the difference dwarfs the difference between things we do understand, like the winning percentage of the Yankees versus that of the Royals, or the numbers 2 and 1.  And who are we to deny existence to those 999,000,000,000,000 people because we're too lazy to count the zeros?

I have one more quibble with Parfit's presentation of the thought experiment: the name.  Naming a thought experiment "The Repugnant Conclusion" is kind of like naming a bill "The Patriot Act" so that you can call anyone who votes against it unpatriotic.  I'm all in favor of being candid about your thoughts, but do so in analysis and discussion, not in naming, because a name is something that everyone is obliged to agree with you on.

By the way, I'm naming the above thought experiment the "if-you-disagree-with-Sam-Bankman-Fried-on-this-then-you-probably-have-a-crush-on-Lord-Voldemort conclusion", or "Sam Is Great" for short, and would appreciate it if it were referred to as such.


The Utility Monster



The Utility Monster is the other of the two famous anti-utilitarianism thought experiments.  There are a few different versions of it running around.  All of the versions revolve around a hypothetical creature known as the Utility Monster.  In some versions it gains immense amounts of pleasure from torturing people--more pleasure than the pain they feel--and in others it simply gains more pleasure than others from consuming resources, e.g. food, energy, etc., and so the utilitarian solution would be to allow it all the resources, while the rest of humanity either withers away or continues a greatly diminished existence.

There's something that seems intuitively wrong about giving all societal resources to one utility monster, but what's going on here is really the same thing as the intuition behind negative utilitarianism: because it's so much easier to make someone feel really bad (e.g. torture them) than really good, no normal person could gain from consuming the world's resources anything close to the losses associated with seven billion tortured people.  In fact, if you took a random person and gave them unlimited resources, it's unlikely they'd be able to make up for even a single tortured person (especially given the lack of basically any happiness gained from income above about $75,000).  In order to make up an additional factor of seven billion, the utility monster in question would have to be a creature that behaves fundamentally differently from any creature we've ever encountered.  In other words, in order for utilitarianism to disagree with our intuitions and endorse a utility monster, the situation would have to be way outside the set of situations we have ever encountered.

And, in fact, it would be a little bit weird if the optimal decisions didn't disagree with our moral intuitions in weird-ass situations, because our intuitions are not meant to deal with them.  This is a phenomenon frequently seen in physics: when you get to extreme situations outside of the size/speed ranges humans generally interact with, our intuitions are wrong.  It seems really weird that if you travel at the speed of light you don't age relative to the rest of the universe, but that's because our intuitions were developed for 10-mile-per-hour situations, not for the speed of light.  It seems really weird that objects do not have well defined positions or velocities but are instead complex-valued probability distributions floating through space, but that's because our intuitions weren't developed to deal with electrons.  It seems really weird that snapping a piece of plutonium into smaller pieces can destroy a city, but it can.

Long story short, utility monsters are generally really bad bargains even for a utilitarian, and trying to work around them is just double-counting this effect (like negative utilitarianism double counts our intuition that it's harder to be really happy than really sad).  And when a utility monster really is the utilitarian option, you're in a really bizarre situation that we shouldn't expect our intuitions to work in anyway.



I Don't Care About Morality



One of the more common responses I get to utilitarianism is some combination of "well, screw morality", "you can't define happiness", and "the universe has no preferred morality".  And all of these are, in a sense, true: in the end it's probably not possible to truly define happiness, and the universe does not have a preferred morality.  And all of these are fine grounds for rejecting utilitarianism, as long as you wouldn't have any objections to any possible universe, including ones in which you and/or other people are being tortured en masse for no good reason.  But as soon as you say that it's "wrong" to steal or torture or murder, you've accepted morality; as soon as you say that it's bad to be racist or sexist, or as soon as you hold political positions, you've accepted morality.  You can't have it both ways.



Please, sir, can I have some more?



Perhaps the most common response I get to utilitarianism, however, is a combination of two statements.  The first, roughly speaking, is "then why aren't you just high all the time?"; the second, roughly, is "then why aren't you in Africa helping out poor kids?"  Yes, these two statements are contradictory; and yes, I often hear them together, at the same time, from the same people.  I'm going to ignore the first statement because it's just wrong, and instead  focus on the second.

Imagine that you lived in a universe where the marginal utility of your 5th hour of time after work was greater when spent volunteering at a soup kitchen than when hanging out with friends.  (It's probably not too big a stretch of your imagination.)  Well, it's probably also true of the fourth hour after work.  And the third...  At some point you might start to lose sanity from working/volunteering too much and/or your productivity might significantly decline, but until that point it seems that utilitarianism says that if you've decided that some of your time--or money--can be better spent on others than on yourself, well, then, why not more of it?  Why not all of it?

I'm going to write much more on this later, but for now I have two points.  The first is that this, truly, is why people aren't utilitarians: in the end what scares people most about utilitarianism is that it encourages selflessness.  And the second point is that it would be really weird if a philosophy held that selfishness was good for the world.  Yes, of course dedicating some of your life to making the world a better place is good, and of course donating more is better.  Defecting on prisoner's dilemmas is bad, and cooperating is good.

I don't, of course, believe that everyone does or ever really will act totally selflessly and totally in the interest of the world, but to argue that they shouldn't is sacrificing an almost tautologically true statement in an effort to reclaim the possibility that you're acting "well enough".

Next up: taking issue with decision theories.




Tuesday, July 17, 2012

Utilitarianism, part 3: Classical, Act, One Level

This is part three in a series of posts on utilitarianism.  For an introduction, see part one.  For a discussion of total vs. average utilitarianism, see part two.

___________________________________



In this post I'm going to address a number of divisions in utilitarianism that don't require quite so much space as total vs. average utilitarianism.



High/low pleasure vs. Hedonistic utilitarianism


High/low pleasure or two-level utilitarianism is the thought that some forms of pleasure are intrinsically better than others, independent of their function for society.  Hedonistic utilitarianism rejects this claim.

But why invent this distinction in the first place?  In general adding new rules to a philosophy should be looked upon critically: they often just serve as ways for people to save themselves from the consequences of their beliefs, and this is a perfect example of it.  I cannot help but note that in high/low pleasure utilitarianism, the "higher" form of pleasure--the more important one--is intellectual pleasure, and that it is a philosophy invented by intellectuals.  It would be like if I created "Starcraft utilitarianism", which is like normal utilitarianism but pleasure gained playing the Starcraft II computer game is weighted more heavily--a relatively transparent attempt to justify my current activities as worthwhile.  This is not to say, of course, that intellectuals are bad for the world--much of society's advancement is due to them--but they should have to justify their work and lifestyle on their own merits, not by inventing philosophies that arbitrarily favor what they do with their life.

This is all, of course, putting aside the issue of defining what exactly an intellectual pursuit is.

I am a hedonistic utilitarian.


Act vs. Rule utilitarianism


Act utilitarianism is what you usually think of as utilitarianism: the philosophy that you should try to maximize utility.  Rule utilitarianism, roughly, states that when evaluating a rule--for instance, a possible law, or maybe a standard like "bike to work instead of driving" (depending on the particular interpretation of rule utilitarianism)--you should evaluate the rule on utilitarian grounds.  It suffers from two quite obvious flaws.  First, what exactly is a rule?  If it's defined as a possible algorithm to follow, then it just reduces to act utilitarianism (e.g. take the set of "rules" designed to produce actions corresponding to the set of possible universes).  Otherwise it's going to be impossible to define rigorously, or even semi-rigorously.  Second, why would you think that utilitarianism would be good for evaluating laws, but not everyday decisions?

This is not to say, of course, that whenever you have to make a decision you should get out a pen and paper and start adding up the utility of everyone in the world.  In many everyday situations the act utilitarian thing to do is to create a (not fully well defined) rule and follow it in unimportant situations to save your brain power for more important things.  When I try to decide how to get to campus I don't spend time calculating the marginal effect that my driving would have on traffic for other drivers; I assume it'd probably make it worse, and that that'd probably decrease their utility.  I also don't calculate the effect on global warming, etc: instead I understand those are all things that incentivize biking over driving, and so as a general matter just bike to campus and spend my time and brain power thinking about much more pressing issues that the world needs to confront, like Tim Lincecum's ERA.  Similarly I would generally be in favor of laws that incentivize biking over driving so as to discourage defections on prisoner's dilemmas, so long as the laws were well designed.  But this is not an argument for rule utilitarianism as a philosophy, just that sometimes it would argue for similar things as act utilitarianism.

For all of  those reasons, I am an act utilitarian.

Classical utilitarianism vs. Negative utilitarianism


The terminology is a bit confused here, so to clarify, by this I mean: what is the relative weighting of pain and pleasure?  Classical utilitarianism does not tend to make much of a distinction between the two (or, if you wish, weights them equally), whereas negative utilitarianism weights pain more heavily: sometimes just by an extra factor (i.e. h(p) = Happiness(p) - k*Pain(p) for some k > 1), and sometimes infinitely more, i.e. reducing suffering is always more important than increasing happiness.  Still others try to split the difference between these two types of negative utilitarianism and weight all pain above some threshold infinitely, while weighting pain below the threshold equally with pleasure.

The strictest form of negative utilitarianism is clearly silly; it implies that if anyone anywhere is suffering, then no pleasure for anyone anywhere matters; the threshold version also has this problem to the extent that the threshold is crossed.  In addition, neither of these is likely to be linear.

So, I'll just look at the weakest version: that h(p), the happiness of a person, is Happiness(p) - k*Pain(p) for some constant k > 1.  There are a number of problems with this:

1) Are happiness and pain calculated separately for each person, or first added together?  I.e.: say p, at a given point, is feeling 10 utils of happiness and 20 utils of pain, and say k=2.  Should you be calculating h(p) = 10 - 20*2 = -30, or h(p) = k*(10 - 20) = 2*(-10) = -20?  (See the sketch after this list.)  The problem with the first version is that it is not going to be linear in how you view an individual's emotions: if you consider all emotions about an upcoming interview as one set of emotions, and everything else as another set, you'll get a different answer than if you consider everything about their hunger as one set of emotions and everything else as another.  So, let's say that you're going with the second version: then what you're really saying is that you care more about net unhappy people than net happy ones.

2) k is totally arbitrary; why is pain twice as bad as pleasure is good?  Why not 10 times?  Why not 1.001 times?  For that matter, why not 0.7 times?

3) Why do you care about pain more than pleasure in the first place?  The reason, I think, is that you're grabbing on to the following fact about people: it's a lot easier to be really really unhappy than it is to be really really happy; there is just about no experience in the world as good as being tortured is bad.  But that fact will already be reflected in classical utilitarianism; naturally someone who is being tortured will have much more negative utility than someone who won the lottery will have positive utility because that's how humans work, and so the correct actions will care about them more; introducing k is just double counting for that.
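
Here is a minimal sketch of the ambiguity in point 1, reproducing the -30 and -20 figures from the 10-utils-of-happiness, 20-utils-of-pain, k=2 example above:

```python
# Sketch of the two aggregation orders from point 1, with k = 2 and a person
# feeling 10 utils of happiness and 20 utils of pain.
K = 2
happiness, pain = 10, 20

# Version 1: keep happiness and pain separate, and weight the pain by k.
h_separate = happiness - K * pain          # 10 - 20*2 = -30

# Version 2: net the emotions first, then weight the result by k only if
# the person comes out net unhappy.
net = happiness - pain                     # -10
h_net_first = K * net if net < 0 else net  # 2*(-10) = -20

print(h_separate, h_net_first)             # -30 -20
```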

For all of these reasons, I am a classical utilitarian.

Monday, July 16, 2012

Some calculations about Tim Lincecum

Tim Lincecum is a starting pitcher for the San Francisco Giants.  For the first five years  of his career he was one of the best pitchers in the majors, with a cumulative ERA of 2.98.  This year, however, he has been atrocious, with an ERA currently at 5.93.  I decided to take a look at pitch-by-pitch data to see if I could make anything of it.

I noticed that he had an unusually large difference between home and away ERA--3.43 at home, but 9.00 on the road.  Given that umpire calls are often cited as a source of home field advantage, I decided to investigate something: could his difference in play this year come from umpires restricting his strike zone?

As it turns out, no.  In both 2011 and 2012 about 10% of his pitches that batters didn't swing at were balls misclassified as strikes by the umpires, and about 2.5% were strikes that were called balls; umpires were no harsher this year than last.

I then took a look at placement.  In particular, all else equal, better pitches are generally around the edge of the strike zone, and worse pitches are generally either right down the middle or way outside the strike zone.  So, I decided to look at the average distance from his pitches to the vertical and horizontal edges of the strike zone; this time there was a difference.  In 2011 the sum of the vertical and horizontal misses from the edges of the strike zone averaged .923 feet; in 2012 it averaged .961 feet.  It doesn't seem like a huge difference, but it is statistically significant, with just a 2% chance of occurring randomly (having a t-test value of -3.23).  So, it does seem like his control is down from last year.
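
For anyone curious what a comparison like that looks like mechanically, here is a minimal sketch of a two-sample t-test on distance-to-the-zone-edge data; the arrays below are randomly generated placeholders with the 2011 and 2012 means plugged in, not the actual pitch-by-pitch data.

```python
# Sketch of a two-sample t-test comparing average distance to the edge of the
# strike zone between two seasons.  The data below are simulated stand-ins,
# NOT the actual 2011/2012 pitch locations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
dist_2011 = rng.normal(loc=0.923, scale=0.30, size=1500)  # feet from zone edge
dist_2012 = rng.normal(loc=0.961, scale=0.30, size=1500)

t_stat, p_value = stats.ttest_ind(dist_2011, dist_2012)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```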

I also looked at the velocity of his pitches.  There's been a lot of talk about his velocity decline; the decline, as it turns out, is real but not that precipitous: his fastballs and sliders have slowed down by about a mile per hour on average, though his changeup and curveball are still at roughly the same speeds they were at in 2011.

So Lincecum's pitches are slower and less controlled than last year; in the end it just looks like Lincecum is pitching worse this season.

Is there anything else I should look at?

Sunday, July 15, 2012

Utilitarianism part 2: Total, Average, and Linearity

This is the second in a series of posts about utilitarianism.  The first is here.  Before I get started, though, there's one definition I'd like to make: a philosophy is an algorithm that orders all possible universes from best to worst; the ordering has to be transitive, reflexive, and any two universes have to be comparable.

______________________________________




One of the most contentious issues in intra-utilitarianism debates is how to aggregate utility between different people.  For the sake of this post I will put off discussions of negative vs. classical utilitarianism, high/low pleasure vs. hedonistic utilitarianism, and other distinctions within the measurement of one individual's utility; I'll discuss those in later posts.  I'm also going to postpone discussion of the repugnant conclusion to a later article, though it is relevant to this one.  So, for now assume we have some (as of now only relative) utility function h(p) which takes one person and spits out their utility, and we want to find some function H(w) which takes the world and spits out a utility of the world.

There are two canonical ways to construct H from h.  The first, total or aggregate utilitarianism, is to just total up the happiness of everyone: H(w) = Sum(h(p)) for all p in w.  The second, average utilitarianism, is to average h(p) for every person p: H(w) = (Sum(h(p)) for all p in w)/(population of  w).
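
As a minimal sketch, treating a world as nothing more than a list of per-person utilities h(p):

```python
# Sketch: a "world" is just a list of per-person utilities h(p).
def total_utility(world):
    return sum(world)

def average_utility(world):
    return sum(world) / len(world)

world = [2, 2, 2, 3]           # three people at 2 utils, one at 3
print(total_utility(world))    # 9
print(average_utility(world))  # 2.25
```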

Defining Zero

I define zero utility to be the utility of anyone who does not feel anything--for instance, a dead person.  For a longer description of why, see the bottom of the post*.


Problems with Average Utility


There are a number of problems that come up, though, if you try to use average utilitarianism.  I'm going to start by giving a few thought experiments, and then talk about what  it is about average utilitarianism that leads to these conclusions.

The separate planets problem

First, say that you have to choose between two possible worlds: one with 10,000 people with utility 2 and 100 people with utility 3, and another with just the 100 people with utility 3.  An average utilitarian would have to choose option two, even though it just involves denying life to a bunch of people who would lead reasonably happy lives.  But perhaps you could try to defend this choice; after all, the world would have only really happy people in the second scenario.

Alright, then.  Say that there are two planets, planet A and planet B.  The two planets are separated by many lightyears and are never going to interact.  Planet A has 10,000 people, each with utility 2; Planet B has 100 people with utility 3.  You're a being who is presented with the following option: do you blow up planet A?  Say that you're relatively sure that if you let the planets continue, their utilities will remain as they are now.  You see the problem now.  This is a lot like the first scenario, but here clearly planet A is better off around; it's a happy planet that's not hurting anyone else.  But if you're computing the average utility of the universe, planet A is decreasing it, and an average utilitarian would want to blow it up.

Ok, you say, what if I just treat them as two different universes, average each individually, and then make decisions separately for them?  The thing is, I can vary the scenario to sit anywhere between the two listed above: maybe there is one planet with two countries, A and B, with the same populations as the planets in the above scenario.  Then what do you do?  How about different families that live near each other but won't ever really interact that much?

The happy hermit

Alright, now say we're back on earth.  You've done some careful studying and determined, reasonably, that the average human utility is about 1.3 utils.  You go hiking and discover, in the middle of a rocky crevice, a hermit living alone in a cabin; no one has visited him for 50 years.  You talk to him a bit, and he seems reasonably happy.  You use your magic Utility Box and find out that his utility is 0.8 utils: positive, though not as happy as the average human.  He enjoys his life in the mountains.  You have a gun.  Do you kill him (assuming that it wouldn't cause you psychological harm, etc.)?  An average utilitarian would.

___________________



Both of these thought experiments exploit the same flaw in average utilitarianism: it's not linear.  Here's what that means.  Say there's a universe U composed of two sub-universes u1 and u2.  H(U) does not equal H(u1)+H(u2).  What this means is that if you take a universe, its average utility will depend on how you split it up when you're doing your math.  In the separate planets scenario, it mattered whether you considered the two planets, or countries, or families, together or separately.  In the hermit example, it mattered whether you considered the world as a whole, or the hermit and the rest of the world separately.
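
Here is a minimal sketch of that non-linearity, using the numbers from the two thought experiments above:

```python
# Sketch: average utility depends on how you carve up the universe, and it can
# be raised by deleting happy-but-below-average lives.
def average(world):
    return sum(world) / len(world)

planet_a = [2] * 10_000   # 10,000 people at utility 2
planet_b = [3] * 100      # 100 people at utility 3

print(average(planet_a + planet_b))  # ~2.01: averaged together, A drags B down
print(average(planet_b))             # 3.0: "blow up planet A" looks better

# The happy hermit: removing a positive-utility person raises the average.
rest_of_world = [1.3] * 1_000
hermit = [0.8]
print(average(rest_of_world + hermit))  # ~1.2995
print(average(rest_of_world))           # 1.3
```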

This is a rather fatal flaw for a philosophy to have; it shouldn't matter whether you consider non-interacting parts together or separately, and it also shouldn't matter exactly how much interaction it takes to be, you know, like, intertwined and stuff.

There's another way to look at the flaws with average utilitarianism, though: in average utilitarianism when you're considering the impact of someone on world utility, you look at not just how happy they are and how happy they make other people, but also how happy people are independently of them and would be whether or not the person in question existed, and how many other people there are.  In other words, as Adam once put it, in some sense the problem with average utilitarianism is that it's not total utilitarianism.

And so I am a total utilitarian.

_________________________________________________________________________________

*If you're an average utilitarian it doesn't matter if you offset all utilities by a given amount; it won't change comparisons between any two options: if (Sum(h(p)) for all p in w1)/(population of  w1) > (Sum(h(p)) for all p in w2)/(population of  w2), then (Sum(h(p)+k) for all p in w1)/(population of  w1) > (Sum(h(p)+k) for all p in w2)/(population of  w2).  If you're a total utilitarian, though, it does matter.  So, what's zero utility?  In other words, what's an example of someone whose utility is completely neutral; who feels neither happiness nor pain?  A dead  person.  (Or, perhaps, an unconscious person.)  This leads to the natural zero point for utility: h(p)=0 means that p is as happy as an unfeeling and/or dead person; as happy, in other words, as a rock.  This definition turns out to be quite necessary.  If you put the zero point anywhere else then you have to decide which dead people to include in your calculations; they're providing non-zero utility and so will affect the utility of various possible universes.  Alright, you say, I won't include any dead people.  Well how about people in vegetative states with no consciousness, happiness, or  pain?  How about fetuses before they've developed the ability to feel pain?  How about a fertilized egg?  How about an unfertilized one?  How about someone who was shot by a gun and is clearly going to die and has lost all brain function, but it's not clear at what point the doctor standing around him is going to pronounce him dead?  The point is that all of  these people clearly don't contribute to the total utility of the world, and so shouldn't influence calculations; furthermore, exactly how we decide when someone is "dead" or "basically dead" shouldn't influence it.  So it is necessary to define h(p_d)=0 for any unfeeling and/or dead person p_d.
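
A quick numerical check of the claim in this footnote, using made-up worlds: shifting everyone's utility by the same constant k never changes an average utilitarian's ranking, but it can flip a total utilitarian's ranking when the populations differ.

```python
# Sketch: adding a constant k to everyone's utility preserves comparisons of
# averages, but can flip comparisons of totals when populations differ.
def average(world): return sum(world) / len(world)
def total(world):   return sum(world)

w1 = [2.0]              # one person at 2 utils
w2 = [0.5, 0.5, 0.5]    # three people at 0.5 utils
k = 1.0

print(total(w1) > total(w2))                                    # True  (2 > 1.5)
print(total([h + k for h in w1]) > total([h + k for h in w2]))  # False (3 < 4.5)

print(average(w1) > average(w2))                                      # True (2 > 0.5)
print(average([h + k for h in w1]) > average([h + k for h in w2]))    # True (3 > 1.5)
```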

Note, also, that neither average nor aggregate utilitarianism cares about the units used for happiness; multiplying all utilities by a constant doesn't change anything.  So, I'll measure utility in units of utils, though I'll generally omit unit labels.

Beauty in Games


“There can be as much value in the blink of an eye as in months of rational analysis.”
-Malcolm Gladwell

One of the trends I’ve noticed in games I’ve played a lot of is that there tends to be a concept of “beauty” that is reflected in the way the players think and talk about the games they play. What do they mean when they say a particular position or aspect of a position is “beautiful”? I’m not completely sure, even though I use this kind of language a lot as well. I have some ideas, though. (That’s why I’m writing a blog post about it)

In games, it’s common to draw a distinction between “tactics” and “strategy.” “Tactics” refers to calculated sequences of specific moves, whereas “strategy” refers to larger patterns within the game. In essence, “strategy” is an approximation for “tactics” over the long term; it is useful when calculating enough moves in advance to make a decision becomes intractable.

In many games, when I am in a situation where calculation is too difficult, I find that I instinctively try to make my position as “beautiful” as possible. I therefore associate beauty more with strategy than with tactics, since it’s a thing I use when I don’t want to think tactically. That said, this isn’t cut-and-dried by any means; I have seen plenty of tactical sequences referred to as “beautiful.”

So what is beauty, then? All I’ve said so far is that I seek it out when I’m too lazy to calculate. Well, here’s an example:



Several times in chess clubs I’ve heard people refer to specific chess pieces as “sexy.” If you want to see an example of a sexy piece, the black bishop in the upper right is a prime example. Look at that monster! Not only is it putting pressure on the pawn on c3, it’s preventing the pawn from moving, since if White pushed the pawn he would put his rook in danger. In order to fully deal with the threat, White would need to move his rook and then push his pawn, which would take two moves, and even then the bishop would remain very powerful.

It’s difficult to say exactly what makes a piece like that bishop look quite so good to a chess-player. Of course, the bishop’s developed, and it’s attacking a pawn, but it’s more than that; there’s something peculiarly beautiful about the bishop that makes applying the descriptor “sexy” to it seem not wholly inappropriate.

Experienced chess-players are likely to appreciate that bishop more than inexperienced players; additionally, they’re likely to appreciate it much more than computers will, despite the fact that computers will beat them at chess every time.

Chess masters almost universally accept that there is “beauty” in chess positions or aspects of them, but they have incredible amounts of trouble teaching this concept to the computer programs they write to play the game. Typical position evaluation algorithms depend on a move search many moves deep (about 10 or 20 depending on the position) and then a crude evaluation at the end. (That’s tactics, followed by strategy. Check it out.) The crude evaluation at the end tends to assign a value to each piece, such as 1/pawn, 3.5/bishop and knight, 5/rook, 9/queen, and then an extra quarter-point per square of mobility available to the pieces. In the position I described, the crude evaluation would assign to Black’s dark-squared bishop a value inferior to her light-squared bishop, since it only defends 6 squares, whereas the other defends 7! And yet it’s clear that the dark-squared bishop is not only the better bishop, but the MVP of Black’s position.
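
As an illustration, here’s a minimal sketch of the kind of crude leaf evaluation described above, using the piece values and quarter-point mobility bonus from this paragraph; the board representation and move generation are abstracted away into simple counts, and the position plugged in at the bottom is hypothetical.

```python
# Sketch of a crude leaf evaluation: material values plus a quarter-point per
# square of mobility.  Legal-move generation is abstracted into simple counts.
PIECE_VALUES = {"pawn": 1.0, "knight": 3.5, "bishop": 3.5, "rook": 5.0, "queen": 9.0}

def crude_eval(material, mobility):
    """material: {piece_name: count}; mobility: squares reachable by all pieces."""
    return sum(PIECE_VALUES[p] * n for p, n in material.items()) + 0.25 * mobility

# Hypothetical position: equal material, but White's pieces reach more squares.
white = crude_eval({"pawn": 7, "bishop": 2, "rook": 2, "queen": 1}, mobility=31)
black = crude_eval({"pawn": 7, "bishop": 2, "rook": 2, "queen": 1}, mobility=27)
print(white - black)  # +1.0 for White -- even though, as noted above, raw
                      # mobility can miss which piece is actually doing the work
```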

Of course, this wouldn’t cause too many problems for the computer, since it can just look so far ahead that it will see the tactical benefits of Black’s sexy bishop. But it still feels like the computer has missed the “point,” somehow.

This gets brought home harder in games where there are more moves available, so looking far ahead is harder, and patterns of “beauty” are harder to predict algorithmically. In the game of Go, for example, there are hundreds of moves available to each player at each move, and there is no approximation for beauty like “mobility” that gives the computer some idea of what pieces (or stones, in go) are really contributing. This makes computers much worse at go than at chess. Take this problem:



One of these corner formations is strong and efficient, and one of them is weak. Which is which? If you only know the rules of go, or if you know the rules and have played a few times but not enough to have learned which one is better (or to have been told), this problem is impossibly hard. The frustrating thing about being a beginner in these kinds of games is that you haven’t yet come to learn what makes a thing beautiful or not; experienced players will know at a glance that you have made a bad move when it would have taken you minutes (or sometimes, years) of calculation to tactically verify that your move was poor.


We can even see concepts like these in games like Starcraft, where it is often necessary to instantly know which of two armies is going to win a battle. Believe it or don’t, but I have heard people refer to Starcraft units, games, and positions as “beautiful,” and I definitely think in those terms a fair amount when I play the game.   


 Which army has the advantage? A Starcraft beginner will have difficulty guessing, but somebody who has played many games will know at a glance because she will have seen scenarios similar to this one many times before.


Beauty in games is a way for experienced players to describe their intuition about a position. It comes up whenever experience allows players to make snap judgments. But at the same time, it seems to me like it’s more than that, more than a strategic shortcut. Game beauty seems to me to deserve equal respect to “real” beauty, the kind we appreciate when we listen to a good song or admire a piece of art. It’s sad, though, that unlike in music, where beauty can be appreciated by everyone, appreciation of beauty in games is restricted to those who play. (Excessively?)












Don’t be fooled by the chess. This entire blog post was really just a long-winded, poorly-reasoned explanation for why playing a lot of Starcraft makes me an artist.

An experienced Starcraft player will recognize this as the widely feared “Six Pool” strategy.


For those who are curious, in the go example, the formation on the left is the strong one; a connection between two stones of two diagonal squares is weak. In the Starcraft example, the Zerg player (Red) has the advantage; zerglings are strong against stalkers and immortals, and there aren't enough stalkers to deal with the threat of the mutalisks.


-Adam

