Monday, December 26, 2016

2016 Donations

In case anyone is interested, here's where I'm giving in 2016.  These reflect a fair bit of thought and communication with people in the EA community, but I haven't put nearly as much thought into it as have people who work full-time thinking about donations.  For other perspectives, you can check out the recommendations/grants of the Open Philanthropy Project, GiveWell, Animal Charity Evaluators, and Giving What We Can.  Disclosure: I'm on the board of some of the charities I'm giving to, and am friends with the people running many of them.

TL;DR: (M = meta EA organization, D = direct work; A = animals, X = x-risk, P = global poverty, G = general/other.)

Largest:
     The Center for Effective Altruism (M/G)
     80,000 Hours (M/G)
Medium:
     The Humane League (D/A)
     The Future of Humanity Institute (D/X)
Small:
     Animal Charity Evaluators (M/A)
     Against Malaria Foundation (D/P)
     The Good Food Institute (D/A)
     The Machine Intelligence Research Institute (D/X)
   

Large Donations


This year I expect my largest donations to be to CEA and 80K.  This reflects my general excitement about the potential of meta Effective Altruism organizations.

Center for Effective Altruism

The Center for Effective Altruism (CEA) is an Oxford (and now Bay Area) effective altruist organization that has been instrumental in organizing the EA community.  It has filled a number of different roles over the years.  Many of the organizations within the EA movement, including 80,000 Hours, Animal Charity Evaluators, Giving What We Can, and the Global Priorities Project, are outgrowths of CEA.  This alone makes it partially responsible for, respectively, the primary EA career advice organization, the primary source for animal welfare charity recommendations, over $1 billion of lifetime donation pledges, and the primary interface between EA, governments, and policy.  CEA also works closely with the Future of Humanity Institute, one of the top sources of research and coordination in the AI safety field.  CEA has also handled PR for much of the EA movement, been one of the primary drivers of growth, helped to organize EA conferences, helped to raise a ton of money for the movement, and done crucial behind-the-scenes work to make sure that needs in the EA movement are being filled.  CEA was recently accepted into Y Combinator's nonprofit program.  I personally have a ton of respect for CEA's leadership.

CEA has so far raised about $800K this year and is targeting about $3M.  You can donate here.


80,000 Hours

80,000 Hours (80K) is an EA career advice service.  Their main role in the EA community is to help promising college students figure out which careers they can do the most good in.  They have helped thousands of impressive students to find jobs working directly for EA organizations, become promising AI researchers, find influential jobs in politics, find particularly good jobs earning to give, and pledge to donate significant amounts of money.  They have generally helped to grow the EA community through outreach to students and, sometimes, wealthy donors.  They personally helped me with my career decision, and have some pretty impressive stats about their cost effectiveness.  They recently participated in Y Combinator as a nonprofit.

80K is targeting a budget of about $1.5M.  You can donate here.

For what it's worth, CEA and 80K are generally targeting a donation ratio of $2 to CEA for each $1 to 80K; I plan to support them in roughly that ratio.


Medium Donations

My medium donations for 2016 are to The Humane League and the Future of Humanity Institute.  While I don't personally think a marginal dollar donated to them this year will do as much as one donated to 80K or CEA, they are both impressive organizations doing a lot of good.

The Humane League

The Humane League (THL) is a farmed animal welfare organization.  THL has had huge impacts in corporate campaigning, movement growth, and leafleting, and has also helped make the animal welfare movement more data-driven.  Their victories in getting large fractions of US hens to live in cage-free environments through corporate campaigning are particularly impressive.  As with CEA and 80K, I am impressed by THL's leadership.  They've been a consistent force for sensible, smart, and effective animal welfare approaches.

You can donate to The Humane League here.

The Future of Humanity Institute

The Future of Humanity Institute (FHI) is an Oxford-based research institute working on issues concerning the long-run future of the world, particularly AI existential risk.  FHI has been the base for a number of important projects in the x-risk space, including Nick Bostrom's work and a lot of progress coordinating between AI safety researchers, industry, academics, and governments.  I would consider making a larger donation to FHI except that I'm not sure they're currently very funding constrained.  Like THL for animals, FHI has been a consistent force for reasonable, productive work in the AI x-risk community.

You can donate to FHI here.


Small Donations

I'll be giving small donations this year to MIRI, Animal Charity Evaluators, the Good Food Institute, and the Against Malaria Foundation.  I don't think any of them are the best uses of money right now but would like to cast a vote of support for what they do.

Animal Charity Evaluators

ACE researches charities working to improve animal welfare and attempts to find the best ones.  I don't think ACE is particularly funding constrained right now, but it has filled an important and neglected role in the EA community, and has a record of choosing very impressive charities that are leading the way for an effective, rational animal welfare movement.  You can donate to ACE here.

The Against Malaria Foundation

AMF is an organization that distributes insecticide-treated bednets in Africa to help prevent malaria.  AMF has, for many years, been possibly the most effective global health charity in terms of lives saved per dollar (currently estimated to be somewhere around $4,000 per life, though the estimates are sensitive to what assumptions you make).  You can donate to AMF here.

The Good Food Institute

GFI is a newer charity that helps promote the development and adoption of plant-based and cultured (i.e. grown in a lab) meat replacements.  I know relatively little about GFI but am excited about the potential of meat replacements to end factory farming, and am working partially off of ACE's recommendation.  You can donate to GFI here.

The Machine Intelligence Research Institute

MIRI is an organization doing technical research on AI safety.  I think (but am not sure!) that FHI's approach to x-risk is probably the more important one right now, and am uncertain of the usefulness of MIRI's output, but MIRI is one of the few places dedicated to technical AI x-risk work right now and was one of the early forces popularizing the idea.  You can donate to MIRI here.



Wednesday, August 12, 2015

Multiplicative Factors in Games and Cause Prioritization

TL;DR: If the impacts of two causes add together, it might make sense to heavily prioritize the one with the higher expected value per dollar.  If they multiply, on the other hand, it makes sense to distribute effort more evenly across the causes.  I think that many causes in the effective altruism sphere interact more multiplicatively than additively, implying that it's important to heavily support multiple causes, not just to focus on the most appealing one.

-----------


Part of the effective altruism movement was founded on the idea that, within public health charities, there is an incredibly wide spread between the most effective and least effective.  Effective altruists have recently been coming around to the idea that at least as important is the difference between the most and least effective cause areas.  But while most EAs will agree that global public health interventions are generally more effective, or at least have higher potential, than supporting your local opera house, there's a fair bit of disagreement over what the most effective cause area is.  Global poverty, animal welfare, existential risk, and movement building/meta-EA charities are the most popular, but there are also proponents of first world education, prioritization research, economics, life extension, and a whole host of other issues.

Recently there's been a lot of talk about whether one cause is so important that all other causes are rounding errors compared to it (though there's some disagreement over what that cause would be!).  The argument goes, roughly: when computing the expected impact of causes, mine is 10^30 times higher than any other, so nothing else matters.  For instance, there are 10^58 future humans, so increasing the odds that they exist by even .0001% is still 10^44 times more important than anything that impacts current humans.  Similar arguments have been made where the "very large number" is the number of animals, or the intractability of a cause, or the moral discounting of some group (often future humans or animals).

This line of thinking is implicitly assuming that the impacts of causes add together rather than multiply, and I think that's probably not a very good model.  But first, a foray into games.

Krug Versus Gromp


Imagine that you're playing some game against a friend.  You each have a character--yours is named Krug, and your opponent's is named Gromp.  The characters will eventually battle each other, once, to the death.  They each do some amount of damage per second D, and have some amount of health H.  They'll keep attacking each other continuously until one is dead.

If they fight, then Krug will take H_g / D_k seconds to kill Gromp, and Gromp will take H_k / D_g seconds to kill Krug, with the winner being the one who lasts longer.  Multiply through by D_g*D_k, and you get that the winner is the one who has the higher D*H--what you're trying to maximize is the product of damage per second, and health.  It doesn't matter what your opponent is doing--there's no rock, paper, scissors going on.  You just want to maximize health * damage.

Now let's say that before this fight, you each get to buy items to equip to your character.  You're buying for Krug.  Krug starts out with no health and no damage.  There are two items you can buy: swords that each give 5 damage per second, and shields that each give 20 health.  They both cost $1 each, and you have $100 to spend.  It turns out that the right way to spend your money is to spend $50 buying 50 swords, and $50 buying 50 shields, ending up with 250 damage per second, and 1,000 health.  (You can play around with other options if you want, but I promise this is the best.)
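If you don't want to play around with it by hand, here's a minimal brute-force check in Python (a sketch; the stats and prices are the ones above):

    # Try every way to split $100 between swords ($1 for 5 damage/sec)
    # and shields ($1 for 20 health), maximizing damage * health.
    SWORD_DPS, SHIELD_HP, BUDGET = 5, 20, 100

    def power(swords):
        shields = BUDGET - swords
        return (swords * SWORD_DPS) * (shields * SHIELD_HP)

    best = max(range(BUDGET + 1), key=power)
    print(best, power(best))  # 50 -> 250 damage/sec * 1,000 health = 250,000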

The really cool thing is that your money allocation is totally independent of the cost of swords and shields, and how much damage/health they give.  You should spend half your money on swords and half on shields, no matter what.  If swords cost $10 and gave 1 attack, and shields cost $1 and gave 100 health, you should still spend $50 on each.  One way to think about this is: the nth dollar I spend on swords will increase my damage per second by a factor of n/(n-1), and the nth dollar spent on shields will increase my health by n/(n-1).  Since all I care about is damage * health, I can just pull out these multiplicative factors--the actual scale of the numbers doesn't matter at all.
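To spell out why the prices drop out (a sketch, with made-up symbols not in the original setup: sword price c_s for d damage each, shield price c_h for h health each, budget M, and s dollars spent on swords):

    damage * health = (s/c_s * d) * ((M-s)/c_h * h) = (d*h)/(c_s*c_h) * s*(M-s)

All the prices and per-item stats collect into a constant out front, so maximizing the product is the same as maximizing s*(M-s), which happens at s = M/2: half your money on each, whatever the prices.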

This turns out to be a useful way to look at a wide variety of games.  In Magic, 4/4's are better than 2/6's and 6/2's; in League of Legends, bruisers win duels; in Starcraft, Zerglings and Zealots are very strong combat units.  In most games, the most powerful duelers are the units that have comparable amounts of investment in attack and defense.

Sometimes there are other stats that matter, too.  For instance, there might be health, damage per attack, and attacks per second.  In this case your total badassery is the product of all three, and you should spend 1/3 of your money on shields, 1/3 on swords, and 1/3 on caffeine (or whatever makes you attack quickly).  Most combat stats in games are multiplicative, and you're usually best off spending equal amounts of money on all of them, unless you're specifically incentivized not to (e.g. by getting more and more efficient ways to buy swords the more you spend on swords).  In general, when factors each increase linearly in money spent and multiply with each other, you're best off spending equal amounts of money on each of the factors.  Let's call this the Principle of Distributed Power (PDP).
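The general version of the PDP is just the AM-GM inequality (a standard fact, stated here in my own notation): if your budget M is split as s_1 + ... + s_k across k stats that each grow linearly in money spent, then your power is proportional to

    s_1 * s_2 * ... * s_k <= (M/k)^k,

with equality exactly when every s_i = M/k.  Constant per-dollar conversion rates just multiply the product by a constant, so they can't move the optimum.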


Multiplicative Causes


So, what does this have to do with effective altruism?

I think that, in practice, the impacts of lots of causes multiply, instead of adding.  For instance, I think that a plausible way to view the future is that expected utility is X * G, where X is the probability that we avoid existential risk and make it to the far future, and G is the goodness of the world we create, assuming we succeed in avoiding x-risk.  By the Principle of Distributed Power, you'd want to invest equal amounts of resources in X and G.  But within X there are actually lots of different forms of existential risk--AI, global warming, bioterrorism, etc.  And within G, there are lots and lots of factors that might multiply with each other--technological advancement, the care with which we treat animals, our ability to effectively govern ourselves, etc.  And the PDP implies that our prior should be to invest comparable resources in each of those terms.

The real world is a lot messier than the battle between Krug and Gromp.  One of the big differences is that the impact of work on most of these causes isn't linear.  If you invest $1M in global warming x-risk maybe you reduce the odds that it destroys us by .01%, but if you invest $10^30 clearly you don't decrease the odds by 10^22%--the odds can't go below 0.  Many of these causes have some best achievable outcome, and so at some point you run into decreasing marginal utility of resources.

Another difference is that we're not starting from zero on all causes.  The world has already invested billions of dollars in fighting global warming, and so that should be subtracted from the amount that's efficient to further spend on it.  (If you start off with $100 already invested in swords, then your next $100 should be invested in shields before you go back to splitting up your investments.)

In practice, when considering causes that multiply together, the question of how to divide up resources depends on how much has already been invested, where on the probability distribution for that cause you currently think you are, and lots of other practicalities.  In other words, it depends on how much you think it costs to increase your probability of a desired outcome by 1%.
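Here's a toy numerical version of that (just a sketch--the 1 - e^(-spending/cost) success curves, the cost constants, and the prior-investment numbers are all invented for illustration, not drawn from any real prioritization work):

    # Two causes whose success probabilities multiply.  Cause 0 already has
    # prior funding; each cause's probability of a good outcome saturates as
    # P_i = 1 - exp(-(prior_i + spend_i) / cost_i).
    import math

    COSTS = [10.0, 10.0]    # dollars per e-fold of progress (made up)
    PRIORS = [30.0, 0.0]    # cause 0 starts with $30 already invested
    BUDGET = 20.0

    def expected_value(spend_0):
        spends = [spend_0, BUDGET - spend_0]
        return math.prod(1 - math.exp(-(PRIORS[i] + spends[i]) / COSTS[i])
                         for i in range(2))

    best = max((BUDGET * i / 100 for i in range(101)), key=expected_value)
    print(best)  # 0.0 here: the whole budget goes to the neglected cause

In this setup the already-funded cause gets nothing more until the neglected one catches up--the same logic as the $100-already-in-swords example above.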

But as long as there are other factors that multiply with it, a factor's importance transfers to them as well.  Which, in some cases, is a fact long ago discovered: the whole reason that x-risk is important is because of how immensely important the future is, which is equally an argument for improving the future and for getting there.

None of this proves anything.  But it's significantly changed my prior, and I now think it's likely that the EA movement should heavily invest in multiple causes, not just one.

I've spent a lot of time in my life trying to decide what the single most important cause is, and pissing other people off by being an asshole when I think I've found it.  I also like playing AD carries.  But my winrate with them isn't very high.  Maybe it's time to build bruiser.




Monday, December 31, 2012

Pitcher Fatigue, Part 2: The Top 10

Earlier, I wrote a post on the declining effectiveness of starting pitchers as they get deeper into games, postulating that it comes from two major sources: first, that it's difficult to throw 100 pitches in a night without your arm getting temporarily tired; and second, that by the second time a batter sees a pitcher, they already know what type of stuff the pitcher is throwing and so are better able to hit it.  Overall I estimated that by rotating pitchers frequently each game so that no pitcher went through the lineup more than once, a team could save about 5.6 wins each season (ignoring other effects, like the fact that if you're in the NL you get to pinch hit more often).

Also, starting with this post I'm going to make a conscious effort to switch from OPS to wOBA as my default batting stat.  wOBA, which is on the same scale as on-base percentage, is basically a version of OPS that uses more accurate weightings for events.
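For concreteness, here's roughly how wOBA is computed (a sketch: the linear weights below are approximate ballpark values; the official coefficients are re-derived each year and differ slightly from these):

    # Approximate wOBA.  ubb = unintentional walks (BB - IBB).  The weights
    # here are ballpark values, not the exact 2012 coefficients.
    def woba(ubb, hbp, singles, doubles, triples, hr, ab, sf):
        numerator = (0.69 * ubb + 0.72 * hbp + 0.89 * singles
                     + 1.26 * doubles + 1.60 * triples + 2.06 * hr)
        return numerator / (ab + ubb + sf + hbp)

    # The w-diff stat defined below is then just the difference of two of these:
    # woba(...second time through the order...) - woba(...first time...)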

______________________________

On average, in 2012, the first time pitchers saw a batter they allowed a wOBA of about 0.338.  The second time they saw those batters, the wOBA jumped to about 0.350, for a difference in wOBA of about 0.011.  I'm going to name this statistic--wOBA for second plate appearances minus wOBA for first--w-diff.

So the league average w-diff in 2012 was about 0.011.  But different pitchers had different w-diffs.

Look, for instance, at R.A. Dickey.  Dickey is a knuckleballer, and so one would expect hitters to be unusually bad the first time they see him--they have no practice hitting a knuckleball--but to get much better the second time, meaning one would expect him to have an unusually large w-diff.  And, in fact, he does have a large w-diff over his career if you ignore all of the seasons in which he didn't have a large w-diff, which is a thing that makes a lot of sense to do if you have a personal vendetta against the year 2011.


Sunday, December 30, 2012

Being a Utilitarian, Part 2: Conventional Charities

This is the second post in a series on actually being a utilitarian in the world; for the first post, look here.  Also, for a more theoretical series on utilitarianism, look here.

______________

So, say that you're a utilitarian, and you're wondering what to do with your life.  (Even if you're not a utilitarian but are wondering what to do with your life, most of this will apply.)  What should you do?  What, in the current society, can an individual do to make the world a better place?  And what causes should you care about?

Is there anything you can do with your life to make the world a better place?


Sunday, December 23, 2012

Less Stupid Use of Pitchers: Pitcher Fatigue

A while ago I wrote a post about one of the most unenlightened areas of baseball strategy: the use of pitchers.  I proposed eliminating the distinction between starting pitchers, middle relievers, and closers in favor of a system that just uses a set of pitchers, each pitching different total numbers of innings, but no single pitcher pitching more than a few innings in a game; in other words, a starter would now throw two innings every few games instead of seven innings every five games.

The advantages of this, as I see it, are fourfold.

1) If you're an NL team, you can pinch hit for your pitchers whenever they come up.

2) Pitchers don't have to throw 100 pitches in a game.

3) Batters never get to see the same pitcher twice in a game, and so can't get used to their pitches.

4) You can get the pitcher-batter match-ups you want all the time, instead of being stuck with your same pitcher the first three times through the lineup.

In the first post I estimated the size of effect (1): pinch hitting for your pitcher every time would let you score about 0.2 more runs per game, translating into about 3.2 wins per season (the difference between a .500 team and a .520 team).
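(For what it's worth, that conversion uses the standard sabermetric rule of thumb that roughly 10 extra runs over a season buy one extra win: 0.2 runs/game * 162 games = 32.4 runs, and 32.4 / 10 is about 3.2 wins, which over 162 games is about .020 of winning percentage.)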

Now I'm going to look at effects (2) and (3).

Tuesday, December 4, 2012

Being a Utilitarian, Part 1

I've written a series of posts about the different types of utilitarianism arguing for aggregate, classical, act, one-level utilitarianism.  I haven't, however, talked at all about what it would mean to be a utilitarian in the real world.

In the real world, obviously, you aren't faced with a series of trolley problems or utility monsters.  If you don't think about it very much, you might conclude that utilitarianism isn't actually useful because you can't calculate the total utility of each possible action.

However, as it turns out, utilitarianism can be useful even if you don't know the exact state of the universe.

In future posts I'll examine thornier, more wide-reaching issues, but for now I'll just talk about one issue--the first issue that I actually thought about in utilitarian terms.  For people familiar with utilitarianism it probably won't be that interesting or revolutionary, but it's a good way to remind yourself that just because a theory is complicated doesn't mean approximations can't be useful.  (It also parallels an argument Peter Singer has made on the subject.)


Re-starting the blog, and results of the second contest

As you may have noticed, after a hiatus while the school year started, I'm back to blogging.

First, I never resolved the second contest.  No one solved the puzzle but Matt Nass made partial progress, so he gets 3 Shadow-points.  I'm going to leave the puzzle open and if anyone solves it they get one Shadow-point.  Here's the puzzle again, with a little bit filled in as a hint:

Instructions for the puzzle are here.

Also, I think that weekly was probably too frequent for the contests, so they're going to change to bi-weekly; I'll have another one out soon.

If there's anything you want me to write about, put it in the comments here.
