Simon_M

1204 karma

Wow - this is a long post, and it's difficult for me to point out exactly which bits I disagree with and which bits I agree with given its structure. I'm honestly surprised it's so popular.

I also don't really understand the title. "Against much financial risk tolerance".

  1. The conclusions don't seem to support that
  2. I'm not sure it makes sense grammatically? "Against too much financial risk tolerance" or "Against many arguments for financial risk tolerance" would.

Starting with your conclusions:

Justifying a riskier portfolio

  • The “small fish in a big pond + idiosyncratic risk” argument, part 1:
    Proportionally small philanthropists should be more inclined to take investment opportunities with a lot of idiosyncratic risk, like startups.

Disagree - as you explain elsewhere, individuals make up the global portfolio, so there's no reason "small philanthropists" should behave differently from large philanthropists. If by "take investment opportunities" you mean "join or found" startups, this section might make more sense.

  • The “cause variety” argument:
    Cause-neutral philanthropists can expand or contract the range of cause areas they fund in light of how much money they have. This lets marginal utility in spending diminish less quickly as the money scales up.

Agree - I think this is a strong argument.

  • The “mispriced equities” argument:
    Certain philanthropists might develop domain expertise in certain areas which informs them that certain assets are mispriced. [This could push in either direction in principle, but the motivating case to my mind is a belief that the EA community better appreciates the fact that a huge AI boom could be coming soon.]

I don't think this is an especially strong argument.

  • The “activist investing” argument, if it’s worth it:
    Stock owners can vote on how a firm is run. Some philanthropists might know enough about the details of some firm or industry that it’s worthwhile for them to buy a lot of stock in that area and participate actively like this—voting against a firm’s bad practices, say—despite the fact that this will probably make their portfolios riskier.

I don't think this is an especially strong argument.

Justifying a more cautious portfolio

  • The “small fish in a big pond + idiosyncratic risk” argument, part 2:
    Proportionally small philanthropists without startups should invest especially cautiously, to dilute the risk that others’ startups bring to the collective portfolio.

I don't think this is super relevant. (Same as the first point in the "riskier" portfolio section - this isn't a statement about the overall EA portfolio.)

  • The “lifecycle” argument:
    The distribution of future funding levels for the causes supported by a given community tends to be high-variance even independently of the financial investments we make today; risky investments only exacerbate this. [I think this is especially true of the EA community.]

I think this argument should actually be in "Directionally ambiguous". The general question is whether the behaviour of future income streams, compared with a typical investor's income stream, makes investing aggressively more or less sensible. Whilst I agree the income stream is more volatile, there are other considerations:

  1. If EA turns out to be a fad, then this is probably bad and we would have wanted to fund it less
  2. EA is generally still growing, and has for a while grown much faster than a typical investor's wage

My intuition is these are enough to move this factor into "Justifying a riskier portfolio" but reasonable minds can differ.

  • The “activist investing” argument, if it’s not worth it:
    Activist investing may be more trouble than it’s worth, and the fact that stocks come with voting rights slightly raises stock prices relative to bond prices.

Agreed, although it's not clear why this should make us more risk averse. Isn't this just neutral?

Directionally ambiguous

  • Arguments from the particulars of current top priorities:
    The philanthropic utility function for any given “cause” could exhibit more or less curvature than a typical individual utility function.

I really strongly disagree with this. I don't find any argument convincing that philanthropic utility functions are more curved than typical individuals' (as I've noted above where you've attempted to argue this). This should be in "Justifying a riskier portfolio".

  • The “truncated lower tail” argument:
    The philanthropic utility function’s “worst-case scenario”—the utility level reached if no resources end up put toward things considered valuable at all— might bottom out in a different place from a typical individual utility function’s worst-case scenario.

Agreed - this is a weak argument

  • Arguments from uncertainty:
    Philanthropists may be more uncertain about the relative impacts of their grants than individuals are about how much they enjoy their purchases. This extra uncertainty could flatten out an “ex ante philanthropic utility function”, or curve it further.

I don't see how this could flatten out the utility function. This should be in "Justifying a more cautious portfolio".

  • The “equity premium puzzle” argument:
    The reasons not captured by the Merton model for why people might want to buy a lot of bonds despite a large equity premium could, on balance, apply to philanthropists more or less than they apply to most others.

I believe it's relatively clear that philanthropists should be more willing to accept the equity risk premium, because their utility is far less correlated with equities than typical investors'. This is one of the strongest arguments and should be in "Justifying a riskier portfolio".
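For reference, the Merton model mentioned in the quoted argument gives the optimal risky-asset share as w* = (μ − r) / (γσ²), where γ is the investor's risk aversion. A minimal sketch with illustrative numbers (a 5% equity premium and 18% volatility are assumptions for the example, not estimates) shows how much the optimal equity share moves as effective risk aversion falls:

```python
# Merton's optimal risky-asset share: w* = (mu - r) / (gamma * sigma**2).
# Premium and volatility below are illustrative assumptions, not estimates.
def merton_share(premium, sigma, gamma):
    """Fraction of wealth in the risky asset for CRRA risk aversion gamma."""
    return premium / (gamma * sigma ** 2)

premium, sigma = 0.05, 0.18
for gamma in (1, 2, 3):
    print(f"gamma={gamma}: optimal equity share = {merton_share(premium, sigma, gamma):.0%}")
```

On these numbers, an investor with γ = 2 holds roughly 77% equities, while γ = 1 implies a levered position - so if low correlation with equities effectively lowers a philanthropist's γ, the implied portfolio shifts sharply toward risk.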

  • The “mission hedging” argument:
    Scenarios in which risky investments are performing well could tend to be scenarios in which philanthropic resources in some domain are more or less valuable than usual (before accounting for the fact that how well risky investments are performing affects how plentiful philanthropic resources in the domain are).

I agree this is ambiguous.

My conclusion looks something like:

Arguments in favour of being more aggressive than typical investors:

  • We have a much flatter utility curve for consumption (ignoring world-state) vs individual investors (using GiveWell's #s, or cause variety). [Strong]
  • We have a much lower correlation between our utility and risk asset returns. (Typically equities are correlated with developed market economies and not natural disasters) [Strong]
  • We have a faster-growing income than a typical investor (as new people join EA, and as EAs on the whole are more successful professionally than typical investors). (And if this turns out not to be true, that's a world in which we'd have wanted to invest less.) [Weak]
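The "flatter utility curve" point can be made concrete with CRRA marginal utility u'(c) = c^(−γ): the ratio u'(2c)/u'(c) = 2^(−γ) measures how fast marginal value falls when the budget doubles. The γ values below are illustrative assumptions, not estimates:

```python
# Marginal CRRA utility is u'(c) = c**(-gamma), so doubling the budget
# leaves a fraction 2**(-gamma) of the original marginal value.
def marginal_value_ratio(gamma):
    return 2.0 ** (-gamma)

for gamma, who in [(0.0, "near-linear philanthropic value (2x money ~ 2x lives)"),
                   (2.0, "typical individual investor (illustrative)")]:
    print(f"gamma={gamma}: doubling money keeps {marginal_value_ratio(gamma):.0%} "
          f"of marginal value  <- {who}")
```

Under the near-linear case the marginal value of money barely falls as wealth doubles (ratio 100%), while at γ = 2 it falls to 25% - which is the sense in which a flatter curve supports a more aggressive portfolio.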

Arguments against being more aggressive than typical investors:

  • The marginal investment is more uncertain than the marginal consumption for a typical investor, ergo the utility curve might be more curved. [Weak]

Taking all this together, I can't see how this post is justifying having less risk tolerance than typical investors.

To go into more detail on some specific issues with the article:

The introduction argues that the general view in EA is to take more risk than other orgs, which take a roughly "typical" amount of risk. (I disagree; at least on your own terms they seem to be taking more risk - 70/30 or 80/20 vs 60/40.)

We now see the wrinkle in a statement like “twice as much money can buy twice as many bed nets, and save roughly twice as many lives”. When markets are doing unusually well, other funders have unusually much to spend on services for the poor, and the poor probably have more to spend on themselves as well. So it probably gets more costly to increase their wellbeing.

Unfortunately, I’m not aware of a good estimate of the extent to which stock market performance is associated with the cost of lowering mortality risk in particular.[8] I grant that the association is probably weaker than with the cost of “buying wellbeing for the average investor” (weighted by wealth), since the world’s poorest probably get their income from sources less correlated than average with global economic performance, and that this might justify a riskier portfolio for an endowment intended for global poverty relief than the portfolios most individuals (weighted by wealth) adopt for themselves.[9] But note that, whatever association there may be, it can’t be straightforwardly estimated from GiveWell’s periodically updated estimates of the current “cost of saving a life”. Those estimates are based on studies of how well a given intervention has performed in recent years, not live data. They don’t (at least fully) shed light on the possibility that, say, when stock prices are up, then the global economy is doing well, and this means that

  • other philanthropists and governments[10] have more to spend on malaria treatment and prevention;[11]
  • wages in Nairobi are high, this raises the prices of agricultural products from rural Kenya, this in turn leaves rural Kenyans better nourished and less likely to die if they get malaria;

and so on, all of which lowers the marginal value of malaria spending by a bit. A relationship along these lines strikes me as highly plausible even in the short run (though see the caveats in footnote 9). And more importantly, over the longer run, it is certainly plausible that if the global economic growth rate is especially high (low), this will lead both to higher (lower) stock prices and to higher (lower) costs of most cheaply saving a life.[12] But even over several years, noise (both in the estimates of how most cheaply to save a life and in the form of random fluctuations in the actual cost of most cheaply saving life) could mask the association between these trends, since the GiveWell estimates are not frequently updated.

In any event, the point is that it’s not being proportionally small per se that should motivate risk tolerance. The “small fish in a big pond” intuition relies on the assumptions that one is only providing a small fraction of the total funding destined for a given cause and that the other funders’ investment returns will be uncorrelated with one’s own. While the first assumption may often hold, the latter rarely does, at least not fully. There’s no general rule that small fishes in big ponds should typically be especially risk tolerant, since the school as a whole typically faces correlated risk.

I think this argument claims far too much.

1. Correlation between global equity returns and global economic performance is already quite low.

2. Correlation between global equity returns and developing nation economic performance is much lower

3. Correlation between global equity returns and opportunities for donations is much lower still. Things like cat bonds exhibit relatively little correlation to global markets. (This assumes you accept the premise that some of the largest problems to befall the developing world are natural disasters.)

The really relevant point here is how strong this correlation is vs a typical investor's. We should expect this correlation to be far lower for a philanthropic investor than for a self-interested, developed-world investor, and therefore it makes little sense as an argument for greater risk aversion.
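To illustrate the comparison being made (with entirely synthetic series - the betas and volatilities below are made-up assumptions, not data): the relevant quantity is the correlation between equity returns and what the money is ultimately for, and that correlation is plausibly much lower for a philanthropist than for a developed-world wage earner:

```python
import numpy as np

# Entirely synthetic monthly return series, purely for illustration.
rng = np.random.default_rng(0)
n = 240  # 20 years of monthly observations

market = rng.normal(0.005, 0.04, n)                          # global equity returns
investor_income = 0.8 * market + rng.normal(0, 0.02, n)      # co-moves strongly
philanthropic_cost = 0.1 * market + rng.normal(0, 0.04, n)   # barely co-moves

corr_investor = np.corrcoef(market, investor_income)[0, 1]
corr_phil = np.corrcoef(market, philanthropic_cost)[0, 1]
print(f"typical investor: {corr_investor:.2f}, philanthropist: {corr_phil:.2f}")
```

The synthetic numbers are chosen to embody the comment's premise, not to establish it; the point is only that the risk-aversion argument turns on this gap in correlations.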

For illustration:

  • Suppose the optimal collective financial portfolio across philanthropists in some cause area would have been 60% stocks, 40% bonds if stocks and bonds had been the only asset options.
  • Now suppose some of the philanthropists have opportunities to invest in startups. Suppose that, using bonds as a baseline, startup systematic risk is twice as severe as the systematic risk from a standard index fund. What should the collective portfolio be?
  • First, consider the unrealistic case in which—despite systematic risk fully twice as severe for startups as for publicly traded stocks—the expected returns are only twice as high. That is, suppose that whenever stock prices on average rise or fall by 1%, the value of a random portfolio of startups rises or falls by x% where x > 0.6. For illustration, say x = 2. Then a portfolio of 30% startups and 70% bonds would behave like a portfolio of 60% stocks and 40% bonds. Either of these portfolios, or any average of the two, would be optimal.
  • But if startups are (say) twice as systematically risky as stocks (relative to bonds), expected returns (above bonds) should be expected to be more than twice as high. If the expected returns were only twice as high, startup founders would only be being compensated for the systematic risk; but as noted, most founders also need to be compensated for idiosyncratic risk.
  • To the extent that startup returns are boosted by this compensation for idiosyncratic risk, the optimal collective financial portfolio across these philanthropists is then some deviation from 30% startups / 70% bonds in the direction of startups. Everyone without a startup should have 100% of their portfolio in bonds.

This entire paragraph doesn't make sense. 

  1. There's a conflation between startup founders and investors.
  2. There is no explanation of why idiosyncratic risk should be compensated! Indeed, you could replace everything you wrote about startups with "public equities" and you'd have a good argument for public equities having higher returns due to idiosyncratic risk.
  3. There's no argument for why philanthropists should be any different from other startup investors.
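For what it's worth, the weighted-beta arithmetic in the quoted illustration does check out - portfolio systematic risk (using bonds as the zero-beta baseline) is just the weight-weighted sum of asset betas, so the two portfolios match; it's the economic claims around that arithmetic that are disputed above. A minimal sketch:

```python
# Systematic risk (beta vs a stock index, with bonds as the zero-beta baseline)
# is linear in portfolio weights, so the two mixes below are equivalent.
def portfolio_beta(weights, betas):
    return sum(w * b for w, b in zip(weights, betas))

beta_stocks, beta_startups, beta_bonds = 1.0, 2.0, 0.0  # the post's assumption x = 2

# 60/40 stocks/bonds vs 30/70 startups/bonds:
mix_a = portfolio_beta([0.60, 0.00, 0.40], [beta_stocks, beta_startups, beta_bonds])
mix_b = portfolio_beta([0.00, 0.30, 0.70], [beta_stocks, beta_startups, beta_bonds])
print(mix_a, mix_b)  # both 0.6, so the systematic exposure is identical
```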

An EA-aligned endowment held in safe bonds would not have lost value, and so would have done a lot more good now that the marginal utility to “EA spending” is (presumably permanently) higher than it otherwise would have been.

Just to be clear: you often write "safe bonds" but talk as if you mean cash or cash equivalents. The 60/40 portfolio you are generally benchmarking against in this post typically invests in bonds with a variety of durations. The US Treasury market as a whole lost ~13% in 2022, so it definitely would have "lost value".
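To make the duration point concrete: to first order, a bond portfolio's mark-to-market loss is its duration times the rise in yields. The figures below are illustrative assumptions (broad Treasury indices sat somewhere around 6-7 years of duration in 2022, and yields rose roughly 2 percentage points, though actual moves varied by maturity):

```python
# First-order bond price sensitivity: dP/P ~= -duration * d_yield.
def price_change(duration, d_yield):
    return -duration * d_yield

# Illustrative: ~6.5y duration, yields up ~2 percentage points.
print(f"{price_change(6.5, 0.02):.1%}")  # roughly the -13% index loss cited above
```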

Suppose a philanthropist’s (or philanthropic community’s) goal is simply to provide for a destitute but otherwise typical household with nothing to spend on themselves. Presumably, the philanthropist should typically be as risk averse as a typical household.[16] Likewise, suppose the goal is to provide for many such households, all of which are identical. The philanthropist should then adopt a portfolio like the combination of the portfolios these households would hold themselves, which would, again, exhibit the typical level of riskiness. This thought suggests that the level of risk aversion we observe among households may be a reasonable baseline for the level of risk aversion that should guide the construction of a philanthropic financial portfolio.

I don't think this is a very relevant model of what most philanthropists are trying to do. They are not trying to help a fixed number of households; they are trying to help a variable number of households as much as possible. This changes the calculus substantially and makes the optimal portfolio much more risk-seeking.

Yes, I agree with this - editing the post to make this correction

Tyler Cowen on the effect of AGI on real rates:

In standard models, a big dose of AI boosts productivity, which in turn boosts the return on capital, which then raises real interest rates.

I am less convinced.  For one thing, I believe most of the gains from truly fundamental innovations are not captured by capital.  Was Gutenberg a billionaire?  The more fundamental the innovation, the more the import of the core idea can spread to many things and to many sectors.

Furthermore, over the centuries real rates of return seem to be falling, even though there are some high productivity eras, such as the 1920s, during that time.  The long-run secular trend might overwhelm the temporary productivity blips, I simply do not know.

I do think AI is likely to increase the variance of relative prices.  Observers disagree where the major impacts will be felt, but possibly some prices will fall a great deal — tutoring and medical diagnosis? — and other prices will not.  Furthermore, only some individuals will enjoy those relative price declines, as many may remain skittish about AI for quite a few years, possibly an entire generation.

That heterogeneity and lack of stasis will make it harder to infer real interest rates from observed nominal interest rates.  Converting nominal to real variables is easiest under conditions of relative stasis, but that is exactly what AI is likely to disrupt.  Furthermore, real inflation rates, and thus real interest rates, across different individuals, are likely to increase in their variance.

Overall, that blurring of nominal and real will make the Fed’s job harder.  And it will be harder for Treasury to forecast what will be “forthcoming real interest rates.”

There's a 3rd reason, which I expect is the biggest contributor. Number of readers of the post/comment.

I started writing a comment, but it got too long, so I wrote it up here.

I summarised a little how various organisations in the EA space aggregate QALYs over time here.

What I've been unable to find anywhere in the literature is how many QALYs a typical human life equates to. If I save a newborn from dying, is that worth 70 QALYs (~global life expectancy), 50 QALYs (not all of life is lived in good health), or some other value?
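One common back-of-the-envelope (an illustrative convention, not a settled answer - both inputs below are assumptions) multiplies remaining life expectancy by an average health-related quality weight, which is why figures from ~50 up to ~70 both circulate:

```python
# QALYs from averting a newborn death ~= remaining life expectancy * avg quality weight.
# Both inputs are illustrative: ~70 years of life expectancy, and a quality
# weight somewhere around 0.7-0.9 depending on the source.
def qalys_saved(life_expectancy, quality_weight):
    return life_expectancy * quality_weight

print(qalys_saved(70, 1.0))   # counting every year as full health
print(qalys_saved(70, 0.75))  # discounting for imperfect health
```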

I think this post by Open Phil is probably related to what you're asking for, and I would also recommend the GiveWell post on the same topic.

I think this is still generally seen as a bit of an open question in the space.

How do you square:

The order was: I learned about one situation from a third party, then learned the situation described in TIME, then learned of another situation because I asked the woman on a hunch, then learned the last case from Owen.

with

No other women raised complaints about him to me, but I learned (in some cases from him) of a couple of other situations where his interactions with women in EA were questionable. 

Emphasis mine. (This statement implies he informed you of multiple cases, while your first statement implies he only informed you of one.)

Thanks - I've already commented. I'm pretty disappointed that Owen resigned 3 days before my comment and I was filibustered. (I've already commented there about the timeline; very curious to know what can possibly have been going on during that period other than getting together a PR strategy.)

Please would someone be able to put together a slightly more fleshed-out timeline of who knew what and when? The best I can tell is:

  • 3rd February 2023 - TIME article published
  • 3rd February 2023 - People start questioning this specific case in the forum
  • 3rd February 2023 - Julia and Owen discuss who should find out about this
  • 3rd February 2023 - Julia informed Nicole that the person was Owen
  • 3rd February 2023 - Julia informs EV US and EV UK boards
  • 4th February 2023 - Julia informed Chana that the person was Owen
  • 11th February 2023 - Owen resigns from the board
  • 20th February 2023 - Owen's resignation is made public

I know I'm probably being dense here, but would it be possible for you to share what the other possibilities are?

Edit: I guess there's "The person doesn't have the role, but we are bound by some kind of confidentiality we agreed when removing them from post"
