
Ross Rheingans-Yoo🔸


Comments (19)

The quadratic-proportional lemma works in the setting where there's an unbounded total pool; if one project's funding necessarily pulls from another, then I agree it doesn't work, to the extent that that tradeoff is in play.

In this case, I'm modeling each cause as small relative to the total pool, in which case the error should be correspondingly small.

The nice thing about the quadratic voting / quadratic funding formula (and the reason that so many people are huge nerds about it) is that the optimal diversification is really easy to state:

  • You should donate in a $X : $1 ratio of org A to org B if you believe that org A is X times as effective as org B (in their marginal use of funding).

One explanation for this: if you're donating $X to A and $1 to B, then adding one more cent to A increases A's total match by 1/X times the amount that B's match would increase if you gave that cent to B instead. So the point where your marginal next cent does equal good in either place is exactly the point where your funding / votes are proportional to impact.

(I have a comment nephew to this one that argues against "politics doesn't belong here", but I also wanted to provide a cautionary suggestion...)

I have found it pretty difficult to think in a balanced way about...any election in the last three cycles...but I want to propose that, on the outside view, "candidate seems Obviously Bad" and "candidate would have a negative counterfactual effect on [classical EA cause area]" are nowhere near as correlated as it intuitively feels they should be.

The example that I will keep pointing to, probably forever, is that George W. Bush was almost certainly the best president in the modern era for both Global Health and Welfare[1] and for GCBRs[2], based on programs that were very far from what I (still) think of as his major policy positions.

I think that Bush's interest in HIV response in Africa was in theory knowable at the time[3], but figuring it out would have required digging into some pretty unlikely topics on a candidate that my would-have-been intellectual circles[4] were pretty strongly convinced was the worse one. (I'm not sure how knowable his proactive interest in pandemic prep was.)

I don't want to claim that it's correct to equate this cycle's Republican candidate with W. Bush here, and I don't have any concrete reason to believe that the Republican candidate is good on particular cause areas. I just mean to say, I wouldn't have believed it of W. Bush, either. And in this cycle, I'm not aware of anyone who has really done the research that would convince me one way or the other in terms of the shut-up-and-multiply expected counterfactual utility.

So, while I don't oppose making decisions on other-than-consequentialist and/or commonsense grounds here (which is likely what's going to actually sway my ballot as a citizen), I want to argue for a stance of relatively deep epistemic uncertainty on the consequentialist dimension, until I see more focused argument from someone who really has done the homework.


  1. In a word, PEPFAR. ↩︎

  2. The National Pharmaceutical Stockpile was founded under Clinton with $51 million of initial funding, but Bush increased the budget tenfold between the Project BioShield Act and PAHPA; expansion of the program since then has been small by comparison. Plus I think that the effect of PEPFAR on the "biosecurity waterline" of the world is under-appreciated. ↩︎

  3. Wikipedia: "According to [Bush's] 2010 memoir, Decision Points, [George W. and Laura Bush] developed a serious interest in improving the fate of the people of Africa after reading Alex Haley’s Roots, and visiting The Gambia in 1990. In 1998, while pondering a run for the U.S. presidency, he discussed Africa with Condoleezza Rice, his future secretary of state; she said that, if elected, working more closely with countries on that continent should be a significant part of his foreign policy." ↩︎

  4. I was too young to have "intellectual circles" during the GWB presidency; I'm approximating myself by my parents here, though the approximation is complicated by EA, LessWrong, et al. not existing at the time. ↩︎

I think that your argument is a fair one to make, but I think it's easier to argue for than against, so I want to argue against to avoid an information cascade towards consensus.

  1. I generally support the outside-view argument for ideological diversification, and if that diversification means anything, it has to mean supporting things that wouldn't "get there" on their own (especially as a small / exploratory fraction of a donation portfolio, as OP indicates here).
  2. EA need not be totalizing, and I think the world is better-off if EAs discuss how to bring a mindset towards effectiveness to other endeavors in their lives.
  3. I generally think that we've swung too far towards consensus and OpenPhil deference as a community (though there's been some swing back), and am actively happier to see things that aren't obviously bunk but swing us towards a more pluralistic and diverse set of approaches.
  4. I think in particular that EA donors have historically under-considered opportunities in politics, and am happy to see increased engagement in considering opportunities there (even if I might disagree with the choice of a particular political race as the most effective, like I might disagree with the choice of an approach within a cause area).

What's more, democratic capitalism + effective altruism will direct effort and resources to effective uses even if only a few capital-havers are unselfishly motivated in this way.

If socialism means the command-economy things, then democratic socialism + effective altruism doesn't reliably direct resources to causes that only a small minority are motivated to support.

Makes total sense not to invest in the charitable side -- I'm generally of a similar mind.[1] The reason I'm curious is that "consider it as two separate accounts" is the most-compelling argument I've seen against tithing investment gains. (The argument is basically that, if both accounts were fully invested, then tithing gains from the personal account to the charity account leads to a total 4:1 ratio between them as withdrawal_time -> ∞, not a 9:1 ratio.[2] Then, why does distribution out of the charity account affect the 'right' additional amount to give out of the personal account?)
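(To make that concrete, here's a quick sketch of my own of where the ~4:1 limit comes from, assuming a $1,000 salary split $100 : $900 and the same growth factor for both accounts:)

```python
# Sketch (mine, not part of the original argument): both accounts are fully
# invested at the same growth factor g; at withdrawal time, 10% of the
# personal account's realized gains are tithed into the charity account.
def split_after_tithe(salary=1_000.0, tithe=0.10, g=10.0):
    charity = tithe * salary * g            # e.g. $100 grown in the charity account
    personal = (1 - tithe) * salary * g     # e.g. $900 grown in the personal account
    gains = (1 - tithe) * salary * (g - 1)  # realized gains on the personal side
    charity += tithe * gains                # tithe 10% of those gains across
    personal -= tithe * gains
    return personal, charity

for g in [1, 2, 10, 100, 10_000]:
    p, c = split_after_tithe(g=g)
    print(f"growth {g:>6}x: personal : charity = {p / c:.2f} : 1")
# The ratio is 9:1 with no growth and falls toward 0.81 / 0.19 ≈ 4.3 : 1
# as the growth factor gets large.
```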

Another way to count it: if you believe that the returns on effective charity are greater than the returns on private investments (and so always make donations as soon as possible), then tithing 10% at the start and 10% of gains after N years is worse for both accounts than just giving a somewhat larger share up-front (and less, or none, of the further investment gains).

Probably this is most relevant to startup employees, who might receive "$100,000 in equity" that they only can sell when it later exits for, say, 10x that. Should a 10% pledge mean $10,000 up-front and $90,000 of the exit (10% when paid + 10% of gains), or just $100,000 of the exit (10% went to the charity account, then exited)?[3]
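A small sketch of the two readings, using the hypothetical numbers above (the A/B labels are mine, and neither is meant as the official accounting of any particular pledge):

```python
# Sketch of the two readings for a hypothetical $100,000 equity grant
# that exits at 10x.
grant, exit_multiple, tithe = 100_000.0, 10.0, 0.10
exit_value = grant * exit_multiple

# Reading A: 10% when the equity is granted (paid out of other cash),
# plus 10% of the eventual gains on the grant.
a_upfront = tithe * grant                 # $10,000 now
a_at_exit = tithe * (exit_value - grant)  # $90,000 at exit
print(f"A: {a_upfront:,.0f} up-front + {a_at_exit:,.0f} at exit")

# Reading B: 10% of the grant sits in the "charity account" as equity
# and is given when it exits.
b_at_exit = tithe * exit_value            # $100,000 at exit
print(f"B: {b_at_exit:,.0f} at exit")

# The totals match; the difference is when the first $10,000 goes out and
# which account captures its growth in the meantime.
```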

(Sorry, don't mean to jump on your personal post with this tangent -- am happy to chat if you find this interesting to think about, but also can write my own post about it on my own time if not.)

  1. ^

    The one case where I do think investment can make sense is where I want to direct the funding to accelerating the program of a for-profit company, e.g. in biotech, and the right way to do so is via direct investment. I do think there are such cases that can be on the frontier of most-effective in EV terms (and for them I only count it as effective giving if I precommit to re-giving any proceeds, without re-counting it as a donation for pledge purposes).

  2. ^

    Consider receiving $1,000 in salary, splitting it $100 : $900 between the accounts, investing each so they grow 10x and become $1,000 : $9,000, then realizing the personal investment gains and tithing $810 on them. Now the accounts are $1,810 : $8,190, which seems a lot more like "giving 18%" than "giving 10%"!

  3. ^

    If the correct baseline is "10% of the exit", should this be any different from the case of a salary worker who makes the $100,000 in cash and puts it in an index fund until it [10x]s? Or what about a professional trader who "realizes gains" frequently with daily trading, but doesn't take any of the money out until after many iterations?

  1. Thought-provoking post; thanks for sharing!

  2. A bit of a tangential point, but I'm curious, because it's something I've also considered:

putting 10% of my paycheck directly in a second account which was exclusively for charity

What do you do with investment income? It's pretty intuitive that if you're "investing to give" and you have $9,000 of personal savings and $1,000 of donation-investments and they both go up 10% over a year, that you should have $9,900 of personal savings and $1,100 of donation-investments. But what would you (or do you) do differently if you put the money into the accounts, donated half of the charity account, and then ended up with $9,900 in personal savings (a $900 annual gain) and $550 in savings-for-giving (a $50 annual gain)?

I have heard at least three different suggestions for how to do this sort of accounting, but am curious what you go with, since the rest of your perspective seems fairly intentional and considered!

I'd argue that you need to use a point estimate to decide what bets to make, and that you should make that point estimate by (1) geomean-pooling raw estimates of parameters, (2) reasoning over distributions of all parameters, then (3) taking arithmean of the resulting distribution-over-probabilities and (4) acting according to that mean probability.

I think "act according to that mean probability" is wrong for many important decisions you might want to take - analogous to buying a lot of trousers with 1.97 legs in my example in the essay. No additional comment if that is what you meant though and were just using shorthand for that position.

Clarifying, I do agree that there are some situations where you need something other than a subjective p(risk) to compare EV(value|action A) with EV(value|action B). I don't actually know how to construct a clear analogy from the 1.97-legged trousers example if the variable we're estimating is a probability (though I agree that there are non-analogous examples; VOI, for example).


I'll go further, though, and claim that what really matters is what worlds the risk is distributed over, and that expanding the point-estimate probability to a distribution of probabilities, by itself, doesn't add any real value. If it is to be a valuable exercise, you have to be careful what you're expanding and what you're refusing to expand.

More concretely, you want to be expanding over things your intervention won't control, and then asking about your intervention's effect at each point in things-you-won't-control-space, then integrating back together. If you instead expand over just any axis of uncertainty, then not only is there a multiplicity of valid expansions, but the natural interpretation will be misleading.

For example, say we have a 10% chance of drawing a dangerous ball from a series of urns, and a 90% chance of drawing a safe one. If we describe it as (1) "50% chance of 9.9% risk, 50% chance of 10.1% risk" or (2) "50% chance of 19% risk, 50% chance of 1% risk" or (3) "10% chance of 99.1% risk, 90% chance of 0.1% risk", how does that change our opinion of <intervention A>? (You can, of course, construct a two-step ball-drawing procedure that produces any of these distributions-over-probabilities.)
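(A quick sketch, just to check the bookkeeping -- all three descriptions collapse to the same 10% point estimate:)

```python
# Quick check (mine): each description is a list of (p_world, p_risk | world)
# branches; all three give the same overall risk, so the point estimate
# alone can't distinguish them.
decompositions = {
    "(1)": [(0.5, 0.099), (0.5, 0.101)],
    "(2)": [(0.5, 0.19), (0.5, 0.01)],
    "(3)": [(0.1, 0.991), (0.9, 0.001)],
}

for name, branches in decompositions.items():
    overall = sum(p_world * p_risk for p_world, p_risk in branches)
    print(f"{name}: overall risk = {overall:.3f}")   # 0.100 in every case
```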

I think the natural intuition is that interventions are best in (2), because most probabilities of risk are middle-ish, and worst in (3), because probability of risk is near-determined. And this, I think, is analogous to the argument of the post that anti-AI-risk interventions are less valuable than the point-estimate probability would indicate.

But that argument assumes (and requires) that our interventions can only change the second ball-drawing step, and not the first. So using that argument requires that, in the first place, we sliced the distribution up over things we couldn't control. (If the first draw is the thing we can control with our intervention, then interventions are best in the world of (3).)


Back to the argument of the original post: You're deriving a distribution over several p(X|Y) parameters from expert surveys, and so the bottom-line distribution over total probabilities reflects the uncertainty in experts' opinions on those conditional probabilities. Is it right to model our potential interventions as influencing the resolution of particular p(X|Y) rolls, or as influencing the distribution of p(X|Y) at a particular stage?

I claim it's possible to argue either side.

Maybe a question like "p(much harder to build aligned than misaligned AGI | strong incentives to build AGI systems)" (the second survey question) is split between a quarter of the experts saying ~0% and three-quarters of the experts saying ~100%. (This extremizes the example, to sharpen the hypothetical analysis.) We interpret this as saying there's a one-quarter chance we're ~perfectly safe and a three-quarters chance that it's hopeless to develop an aligned AGI instead of a misaligned one.

If we interpret that as if God will roll a die and put us in the "much harder" world with three-quarters probability and the "not much harder" world with one-quarter probability, then maybe our work to increase the chance that we get an aligned AGI is low-value, because it's unlikely to move either the ~0% or the ~100% much (and we can't change the die). If this were the only stage, then maybe all work on AGI risk would be worthless.

But "three-quarter chance it's hopeless" is also consistent with a scenario where there's a three-quarters chance that AGI development will be available to anyone, and many low-resourced actors will not have alignment teams and find it ~impossible to develop with alignment, but a one-quarter chance that AGI development will be available only to well-resourced actors, who will find it trivial to add on an alignment team and develop alignment. But then working on AGI risk might not be worthless, since we can work on increasing the chance that AGI development is only available to actors with alignment teams.

I claim that it isn't clear, from the survey results, whether the distributions of experts' probabilities for each step reflect something more like the God-rolls-a-die model, or different opinions about the default path of a thing we can intervene on. And if that's not clear, then it's not clear what to do with the distribution-over-probabilities from the main results. Probably they're a step forward in our collective understanding, but I don't think you can conclude from the high chances of low risk that there's a low value to working on risk mitigation.

I agree that geomean-of-odds performs better than geomean-of-probs!

I still think it has issues for converting your beliefs to actions, but I collected that discussion under a cousin comment here: https://forum.effectivealtruism.org/posts/Z7r83zrSXcis6ymKo/dissolving-ai-risk-parameter-uncertainty-in-ai-future?commentId=9LxG3WDa4QkLhT36r

An explicit case where I think it's important to arithmean over your subjective distribution of beliefs:

  • coin A is fair
  • coin B is either 2% heads or 98% heads, you don't know
  • you lose if either comes up tails.

So your p(win) is "either 1% or 49%".

I claim the FF should push the button that pays us $80 if we win and -$20 if we lose, and in general should make action decisions consistent with a point estimate of 25%. (I'm ignoring here the opportunity to seek value of information, which could be significant!)

It's important not to use geomean-of-odds to produce your actions in this scenario; that pools to odds of about 0.099, i.e. a probability of roughly 9%, and would imply you should avoid the +$80 / -$20 button, which I claim is the wrong choice.
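Putting numbers on that, as a quick sketch:

```python
import math

# Sketch of the coin example above: p(win) is either 1% or 49% with equal
# subjective weight, and the button pays +$80 on a win, -$20 on a loss.
ps = [0.01, 0.49]

arith_p = sum(ps) / len(ps)                                     # 0.25
geo_odds = math.prod(p / (1 - p) for p in ps) ** (1 / len(ps))  # ~0.099
geo_p = geo_odds / (1 + geo_odds)                               # ~0.09

def button_ev(p):
    return p * 80 - (1 - p) * 20

actual_ev = sum(button_ev(p) for p in ps) / len(ps)  # averaging over the two worlds

print(f"arithmetic-mean p = {arith_p:.3f}, EV at that p = {button_ev(arith_p):+.2f}")
print(f"geomean-of-odds p = {geo_p:.3f}, EV at that p = {button_ev(geo_p):+.2f}")
print(f"actual EV over the two worlds = {actual_ev:+.2f}")
# Acting on the arithmetic mean (+$5, press) agrees with the actual EV,
# because EV is linear in p; acting on the geomean-of-odds pool (about
# -$11, don't press) does not.
```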
