I might well donate to this. You've got a good framework: long-run impacts are important but tough to know. I agree with investigating all five of these topics and with changing institutions to address unknown future risks; that seems at least as likely to work as direct mitigation of known ones. Your comment on the relative importance of different kinds of meta-research for the far future also seems spot on.
Some smaller points:
I'm with you on immigration, but for different reasons. I don't see why increasing GDP is particularly great for maximizing long-run welfare, since, as Nick Bostrom argues in his existential risk paper, what we really want to optimize for is safety. So my guess is that immigration's biggest impact would be increasing good-faith cooperation between countries, helping them avoid dangerous unilateral initiatives, rather than boosting human capital.
http://www.nickbostrom.com/papers/unilateralist.pdf
Some other things I think might be worth looking into:
1. Not only foresight itself, but methods for communicating whatever is found to policymakers and, in democratic countries, to the public. Situations might arise where outcomes are predictable, but only a few people know it and they lack influence. I'm thinking here of embryo selection. In general, I hope that, as Al Gore writes in his book "The Future," we are able to "steer" technological changes to suit current priorities instead of just having them drop into our laps out of nowhere. Or, as Paul Christiano puts it, we should increase the influence of human values over the far future. This is in contrast to Robin Hanson, who has actually argued that voter foresight is bad.
http://www.overcomingbias.com/2011/01/against-voter-foresight.html
2. Lowering barriers to international trade, and perhaps promoting democracy, since democratic countries tend to be more peaceful and internationally cooperative. There might already be a lot of money flowing toward this, though.
http://longnow.org/seminars/02012/oct/08/decline-violence/
3. Whether, in the case of AI, we can really expect any current actions to persist into whatever future world could create a potentially dangerous, self-sufficient AI civilization. We already face high uncertainty about the efficacy of altering the political landscape now or in the near future, and the "track" leading to AI seems hugely volatile, adding a whole new layer of haze. This suggests to me that no action on this front is justified right now.
Lastly, as a practical matter: if you did create an organization, I would hope it could avoid taking a clear stance on the transhumanist vs. bioconservative question, since for me, unlike with the points above, that might be a deal-breaker. Unfortunately, this is why I don't donate to FHI.
On "Quantifiable vs. Not (currently) quantifiable": it might be better to say "Decent estimate" vs. "I (we?) have not even a qualitative clue." The latter category does seem to contain the (most?) important causes, so winnowing the former is of limited value as a strategy.
Second, rankings even of quantified causes may not be obvious, since you must still decide, say, between helping nearby and far-away people. But sane rankings will largely overlap, so having two categories seems sound.
I would be interested if anyone has ideas on how to rate the effectiveness of political advocacy.
There's also Paul Christiano's distinction between human values and more specific values: goals can be value-dependent in general yet still accepted by a large majority of people alive today. Political causes can then be divided into those that attempt to further human values, for instance by informing people of facts, and those that merely further specific values. Unfortunately, it seems that the most obviously effective measures push specific values.
I like the main point here. I'd suggest that a series of concentric "rings" around yourself, for local, regional, and global charity, is in a sense more "logical" than an arbitrary discrete jump from spending money on yourself to global charity. But a counterargument is that people just don't think like this, and in practice things collapse into a dichotomy of me vs. not-me.