TL;DR: (M = meta EA organization, D = direct work; A = animals, X = x-risk, P = global poverty, G = general/other.)
Largest:
The Center for Effective Altruism (M/G)
80,000 Hours (M/G)
Medium:
The Humane League (D/A)
The Future of Humanity Institute (D/X)
Small:
Animal Charity Evaluators (M/A)
Against Malaria Foundation (D/P)
The Good Food Institute (D/A)
The Machine Intelligence Research Institute (D/X)
Large Donations
This year I expect my largest donations to be to CEA and 80K. This reflects my general excitement about the potential of meta Effective Altruism organizations.
Center for Effective Altruism
The Center for Effective Altruism (CEA) is an Oxford (and now Bay Area) effective altruist organization that has been instrumental in organizing the EA community. It has filled a number of different roles over the years. Many of the organizations within the EA movement, including 80,000 Hours, Animal Charity Evaluators, Giving What We Can, and the Global Priorities Project, are outgrowths of CEA. This alone makes it partially responsible for, respectively, the primary EA career advice organization, the primary source for animal welfare charity recommendations, over $1 billion of lifetime donation pledges, and the primary interface between EA, governments, and policy. CEA also works closely with the Future of Humanity Institute, one of the top sources of research and coordination in the AI safety field. CEA has also handled PR for much of the EA movement, been one of the primary drivers of growth, helped to organize EA conferences, helped to raise a ton of money for the movement, and done crucial behind-the-scenes work to make sure that needs in the EA movement are being filled. CEA was recently accepted to Y-Combinator's nonprofit section. I personally have a ton of respect for CEA's leadership.
CEA has so far raised about $800K this year and is targeting about $3M. You can donate here.
80,000 Hours
80,000 Hours (80K) is an EA career advice service. Their main role in the EA community is to help promising college students figure out which careers they can do the most good in. They have helped thousands of impressive students to find jobs working directly for EA organizations, become promising AI researchers, find influential jobs in politics, find particularly good jobs earning to give, and pledge to donate significant amounts of money. They have generally helped to grow the EA community through outreach to students and, sometimes, wealthy donors. They personally helped me with my career decision, and have some pretty impressive stats about their cost effectiveness. They recently participated in Y-Combinator as a nonprofit.
80K is targeting a budget of about $1.5M. You can donate here.
For what it's worth, CEA and 80K are generally targeting a donation ratio of $2 to CEA for each $1 to 80K; I plan to split my support in roughly that ratio.
Medium Donations
My medium donations for 2016 are to The Humane League and the Future of Humanity Institute. While I don't personally think a marginal dollar donated to them this year will do as much as one donated to 80K or CEA, they are both impressive organizations doing a lot of good.
The Humane League
The Humane League (THL) is a farmed animal welfare organization. THL has had huge impacts in corporate campaigning, movement growth, and leafleting, and has also helped make the animal welfare movement more data-driven. Their victories in getting large fractions of US hens moved to cage-free environments through corporate campaigning are particularly impressive. As with CEA and 80K, I am impressed by THL's leadership. They've been a consistent force for sensible, smart, and effective animal welfare approaches.
You can donate to The Humane League here.
The Future of Humanity Institute
The Future of Humanity Institute (FHI) is an Oxford-based research institute that's working on issues concerning the long-run future of the world, particularly AI existential risk. FHI has been the base for a number of important projects in the x-risk space, including Nick Bostrom's work and a lot of progress coordinating between AI safety researchers, industry, academics, and governments. I would consider making a larger donation to FHI except that I'm not sure they're currently very funding constrained. Like THL for animals, FHI has been a consistent force for reasonable, productive work in the AI x-risk community.
You can donate to FHI here.
Small Donations
I'll be giving small donations this year to MIRI, Animal Charity Evaluators, the Good Food Institute, and the Against Malaria Foundation. I don't think any of them are the best uses of money right now but would like to cast a vote of support for what they do.
Animal Charity Evaluators
ACE researches charities working on improving animal welfare and attempts to find the best. I don't think ACE is particularly funding constrained right now, but it has served an important and neglected role in the EA community, and it has a record of choosing very impressive charities that are leading the way for an effective, rational animal welfare movement. You can donate to ACE here.
The Against Malaria Foundation
AMF is an organization that distributes insecticide-treated bednets in Africa to help prevent malaria. AMF has, for many years, been possibly the most effective global health charity in terms of lives saved per dollar (currently estimated to be somewhere around $4,000 per life, though the estimates are sensitive to what assumptions you make). You can donate to AMF here.
The Good Food Institute
GFI is a newer charity that helps to promote the development and adoption of plant-based and cultured (i.e., grown in a lab) meat replacements. I know relatively little about GFI but am excited about the potential of meat replacements to end factory farming, and am working partially off of ACE's recommendation. You can donate to GFI here.
The Machine Intelligence Research Institute
MIRI is an organization doing technical research on AI safety. I think (but am not sure!) that FHI's approach to x-risk is probably the more important one right now, and am uncertain of the usefulness of MIRI's output, but MIRI is one of the few places dedicated to technical AI x-risk work right now and was one of the early forces popularizing the idea. You can donate to MIRI here.