Wednesday, August 12, 2015

Multiplicative Factors in Games and Cause Prioritization

TL;DR: If the impacts of two causes add together, it might make sense to heavily prioritize the one with the higher expected value per dollar.  If they multiply, on the other hand, it makes sense to distribute effort more evenly across the causes.  I think that many causes in the effective altruism sphere interact more multiplicatively than additively, implying that it's important to heavily support multiple causes, not just to focus on the most appealing one.

-----------


Part of the effective altruism movement was founded on the idea that, within public health charities, there is an incredibly wide spread between the most and least effective ones.  Effective altruists have recently been coming around to the idea that the difference between the most and least effective cause areas is at least as important.  But while most EAs will agree that global public health interventions are generally more effective, or at least have higher potential, than supporting your local opera house, there's a fair bit of disagreement over what the most effective cause area is.  Global poverty, animal welfare, existential risk, and movement building/meta-EA charities are the most popular, but there are also proponents of first world education, prioritization research, economics, life extension, and a whole host of other issues.

Recently there's been a lot of talk about whether one cause is so important that all other causes are rounding errors compared to it (though there's some disagreement over what that cause would be!).  The argument roughly goes: when computing the expected impact of causes, mine is 10^30 times higher than any other, so nothing else matters.  For instance, there are 10^58 future humans, so increasing the odds that they exist by even .0001% is still 10^44 times more important than anything that impacts current humans.  Similar arguments have been made where the "very large number" is the number of animals, or the intractability of a cause, or the moral discounting of some group (often future humans or animals).

This line of thinking implicitly assumes that the impacts of causes add together rather than multiply, and I think that's probably not a very good model.  But first, a foray into games.

Krug Versus Gromp


Imagine that you're playing some game against a friend.  You each have a character--yours is named Krug, and your opponent's is named Gromp.  The characters will eventually battle each other, once, to the death.  They each do some amount of damage per second D, and have some amount of health H.  They'll keep attacking each other continuously until one is dead.

If they fight, then Krug will take H_g / D_k seconds to kill Gromp, and Gromp will take H_k / D_g seconds to kill Krug, with the winner being the one who lasts longer.  So Krug wins if H_k / D_g > H_g / D_k; multiply both sides by D_g*D_k and that becomes H_k*D_k > H_g*D_g--the winner is the one with the higher D*H.  What you're trying to maximize is the product of damage per second and health.  It doesn't matter what your opponent is doing--there's no rock, paper, scissors going on.  You just want to maximize health * damage.
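Here's a minimal sketch of that check in Python (the stats in the example at the bottom are made up for illustration):

def duel_winner(h_k, d_k, h_g, d_g):
    """Return the winner of a continuous fight to the death.

    Krug needs h_g / d_k seconds to kill Gromp; Gromp needs h_k / d_g
    seconds to kill Krug.  Whoever survives longer wins, which is the
    same as comparing the products h_k * d_k and h_g * d_g.
    """
    krug_survives = h_k / d_g   # seconds before Gromp kills Krug
    gromp_survives = h_g / d_k  # seconds before Krug kills Gromp
    if krug_survives > gromp_survives:
        return "Krug"
    if gromp_survives > krug_survives:
        return "Gromp"
    return "tie"

# Example: 1000 health / 250 dps beats 1200 health / 200 dps,
# since 1000 * 250 > 1200 * 200.
print(duel_winner(1000, 250, 1200, 200))  # -> Krug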

Now let's say that before this fight, you each get to buy items to equip to your character.  You're buying for Krug.  Krug starts out with no health and no damage.  There are two items you can buy: swords that each give 5 damage per second, and shields that each give 20 health.  They both cost $1 each, and you have $100 to spend.  It turns out that the right way to spend your money is to spend $50 buying 50 swords, and $50 buying 50 shields, ending up with 250 damage per second, and 1,000 health.  (You can play around with other options if you want, but I promise this is the best.)
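If you don't want to take my word for it, here's a quick brute-force check (a sketch, using the $1 prices and the +5 damage / +20 health numbers from above):

# Try every whole-dollar split of the $100 budget between swords and shields.
best = max(range(101),
           key=lambda swords: (5 * swords) * (20 * (100 - swords)))
print(best)  # -> 50: half on swords, half on shields
# That's 250 damage per second and 1,000 health, for a product of 250,000.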

The really cool thing is that your money allocation is totally independent of the cost of swords and shields, and how much damage/health they give.  You should spend half your money on swords and half on shields, no matter what.  If swords cost $10 and gave 1 attack, and shields cost $1 and gave 100 health, you should still spend $50 on each.  One way to think about this is: the nth dollar I spend on swords will increase my damage per second by a factor of n/(n-1), and the nth dollar spent on shields will increase my health by a factor of n/(n-1).  Since all I care about is damage * health, I can just pull out these multiplicative factors--the actual scale of the numbers doesn't matter at all.
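To see why the prices don't matter: say each dollar of swords buys a damage and each dollar of shields buys b health, and you put x of your M dollars into swords.  Then damage * health = (a*x) * (b*(M - x)) = a*b * x*(M - x), and x*(M - x) is maximized at x = M/2 regardless of a and b--the prices and stat sizes just scale the product, they don't move the optimum.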

This turns out to be a useful way to look at a wide variety of games.  In Magic, 4/4's are better than 2/6's and 6/2's; in League of Legends, bruisers win duels; in Starcraft, Zerglings and Zealots are very strong combat units.  In most games, the most powerful duelers are the units that have comparable amounts of investment in attack and defense.

Sometimes there are other stats that matter, too.  For instance, there might be health, damage per attack, and attacks per second.  In this case your total badassery is the product of all three, and you should spend 1/3 of your money on shields, 1/3 on swords, and 1/3 on caffeine (or whatever makes you attack quickly).  In general most combat stats in games are multiplicative, and you're usually best off spending equal amounts of money on all of them, unless you're specifically incentivized not to (e.g. by getting more and more efficient ways to buy swords the more you spend on swords).  More generally: when factors each increase linearly in the money spent and multiply with each other, you're best off spending equal amounts of money on each of the factors.  Let's call this the Principle of Distributed Power (PDP).
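The same brute-force check works for the three-stat version (a sketch; the per-dollar rates of 20 health, 5 damage, and 0.1 attacks per second are made up):

# Try every whole-dollar three-way split of a $99 budget.
budget = 99
splits = ((h, d, budget - h - d)
          for h in range(budget + 1)
          for d in range(budget + 1 - h))
best = max(splits,
           key=lambda s: (20 * s[0]) * (5 * s[1]) * (0.1 * s[2]))
print(best)  # -> (33, 33, 33): an even three-way split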


Multiplicative Causes


So, what does this have to do with effective altruism?

I think that, in practice, the impacts of lots of causes multiply instead of adding.  For instance, I think that a plausible way to view the future is that expected utility is X * G, where X is the probability that we avoid existential risk and make it to the far future, and G is the goodness of the world we create, assuming we succeed in avoiding x-risk.  By the Principle of Distributed Power, you'd want to invest equal amounts of resources in X and G.  But within X there are actually lots of different forms of existential risk--AI, global warming, bioterrorism, etc.  And within G, there are lots and lots of factors, each of which might multiply with each other--technological advancement, the care with which we treat animals, the ability to effectively govern ourselves, etc.  And the PDP implies that our prior should be to invest comparable resources in each of those terms.
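One way to write out that nesting (under the strong simplifying assumptions that the risks are independent and the sub-factors really do multiply): expected utility = X * G, where X is roughly the product of (1 - p_i) over each risk i's probability p_i of wiping us out, and G is the product of sub-factors g_1 * g_2 * ... for technology, animal welfare, governance, and so on.  Every term in that product is a place the PDP says to put resources.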

The real world is a lot messier than the battle between Krug and Gromp.  One of the big differences is that the impact of work on most of these causes isn't linear.  If you invest $1M in global warming x-risk maybe you reduce the odds that it destroys us by .01%, but if you invest $10^30 clearly you don't decrease the odds by 10^22%--the odds can't go below 0.  Many of these causes have some best achievable outcome, and so at some point you have to have decreasing marginal utility of resources.
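Here's a toy version of what diminishing returns does to the split (a sketch; the saturation curve and the $50 scale are totally made up):

import math

def expected_value(spend_x, budget=100):
    # Pretend x-risk reduction saturates: the probability of making it to
    # the future approaches 1 as you pour money in, while the goodness of
    # that future stays linear in whatever's left over.
    X = 1 - math.exp(-spend_x / 50)
    G = budget - spend_x
    return X * G

best = max(range(101), key=expected_value)
print(best)  # -> 40: less than an even split goes to the saturating factor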

Another difference is that we're not starting from zero on all causes.  The world has already invested billions of dollars in fighting global warming, and so that should be subtracted from the amount that's efficient to further spend on it.  (If you start off with $100 already invested in swords, then your next $100 should be invested in shields before you go back to splitting up your investments.)
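And here's the sword version of that parenthetical, as a sketch (same made-up $1 prices as before, with $100 of swords already bought):

existing_swords = 100  # dollars already sunk into swords
best = max(range(101),
           key=lambda new_swords: (5 * (existing_swords + new_swords))
                                  * (20 * (100 - new_swords)))
print(best)  # -> 0: the entire next $100 goes to shields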

In practice, when considering causes that multiply together, the question of how to divide up resources depends on how much has already been invested, where on the probability distribution for that cause you currently think you are, and lots of other practicalities.  In other words, it depends on how much you think it costs to increase your probability of a desired outcome by 1%.

But as long as there are other factors that multiply with it, a factor's importance transfers to them as well.  In some cases this is a fact long ago discovered: the whole reason that x-risk is important is how immensely important the future is, which is equally an argument for improving the future and for getting there.

None of this proves anything.  But it's significantly changed my prior, and I now think it's likely that the EA movement should heavily invest in multiple causes, not just one.

I've spent a lot of time in my life trying to decide what the single most important cause is, and pissing other people off by being an asshole when I think I've found it.  I also like playing AD carries.  But my winrate with them isn't very high.  Maybe it's time to build bruiser.



