Peter Favaloro

Comments

Hi Ozzie – Peter Favaloro here; I do grantmaking on technical AI safety at Open Philanthropy. Thanks for this post, I enjoyed it.

I want to react to this quote:
…it seems like OP has provided very mixed messages around AI safety. They've provided surprisingly little funding / support for technical AI safety in the last few years (perhaps 1 full-time grantmaker?)

I agree that over the past year or two our grantmaking in technical AI safety (TAIS) has been too bottlenecked by our grantmaking capacity, which in turn has been bottlenecked in part by our ability to hire technical grantmakers. (Though also, when we've tried to collect information on what opportunities we're missing out on, we’ve been somewhat surprised at how few excellent, shovel-ready TAIS grants we’ve found.)

Over the past few months I’ve been setting up a new TAIS grantmaking team, to supplement Ajeya’s grantmaking. We’ve hired some great junior grantmakers and expect to publish an open call for applications in the next few months. After that we’ll likely try to hire more grantmakers. So stay tuned!

FWIW I don't think these are nitpicks -- I think they point to a totally different takeaway than Mathias suggests in his (excellent) post. If there are political reforms that the various (smart, altruistically-motivated, bias-aware) camps can agree on, it seems like they should work on those instead of retreating to totally uncontroversial RCT-based interventions. Especially since the set of interventions that can be tested in RCTs doesn't include the interventions that either group thinks are most impactful. 

More to the point: it seems like both the camps Mathias describes, the EA libertarians and the Effective Samaritans, would agree that their potential influence over how political economy develops over time has much higher stakes (from a cosmopolitan moral perspective) than their potential influence over the sorts of interventions that are amenable to RCTs. It seems far from obvious that they should do the lower-stakes thing, instead of trying to find some truth-tracking approach to work on the higher-stakes thing. (E.g. only pursue the reforms that both camps want; or cooperate to build institutions/contexts that let both camps compete in the marketplace of ideas in a way that both sides expect to be truth-tracking, or just compete in the existing marketplace of ideas and hope the result is truth-tracking, etc.) 

Similarly, it seems like AI accelerationists and AI decelerationists would both agree that their potential influence over how AI plays out has much higher stakes (from a cosmopolitan moral perspective) than their potential influence over the sorts of interventions that are amenable to RCTs. So it's far from obvious that it would be better for them to do the lower-stakes thing instead of trying to find some truth-tracking approach to do the higher-stakes thing.

To be clear: I think Mathias' post is excellent. I myself work partly on GHW causes, for mostly the reasons he gestures at here. Still, I wanted to spell out the opposing case as I see it.

(In case it's useful to either Simon or Michael: I argue in favor of both these points in my comment on this post.)

Thanks for this post, Phil! I work on some related issues at Open Philanthropy, and we’re always grateful for thoughtful engagement like this. What follows here are my own views, not necessarily Open Philanthropy’s.

Overall, I agree with your skepticism of arguments that use simple models to make the case for levered investments. But I disagree that we should model our philanthropic opportunity set as having steeper-than-log diminishing returns, for two reasons: one about aggregating across sub-causes, and one about the time horizon over which we should think about fluctuations in non-EA spending.

I made similar comments on your draft of this piece, and you adapted the writeup in a very reasonable way. But I think there’s still daylight between your model and mine, which is why I’m repeating myself here. Thanks for the opportunity to comment on the draft, and for considering these critiques as carefully as you have.


Aggregating across sub-causes

As you note in appendix C, the spending/impact curvature of the overall philanthropic utility function can be much flatter than the curvature of any specific cause. (I can’t tell how much that consideration flows through to your overall recommendations. I think it should affect your bottom line a lot.) It’s important to note that curvature gets flatter as you aggregate across sub-causes to the cause level, and not just across causes to the level of the philanthropic utility function – and sub-causes can be a lot narrower than you might expect. 


For instance, imagine that GiveDirectly recipient households each have a money/utility curvature parameter (“eta”) of 1, but they vary in their baseline level of consumption, and in the costs of getting money to them. Further, assume GiveDirectly is good at targeting its dollars to most efficiently generate utility. So the first million goes to a very poor region with low transaction costs, then the next million is spread across (a) the previously mentioned region plus (b) the next-poorest region with low transaction costs. And so on. If you model this out, you'll find that the curvature of GiveDirectly's overall opportunity set can have an eta of much less than 1, depending on your assumptions.
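
Here's a minimal sketch of what "modeling this out" could look like. This is my own toy model, not OP's or GiveDirectly's actual analysis, and every parameter value below is an illustrative assumption: households have log utility (eta = 1), regions vary in baseline consumption and transfer costs, and a greedy funder sends each marginal dollar wherever it buys the most utility.

```python
# Toy model (illustrative assumptions throughout): households with log
# utility (eta = 1), grouped into regions that vary in baseline consumption
# and transfer costs. A greedy funder allocates each marginal dollar to the
# region with the highest marginal utility per dollar, and we then fit the
# implied eta of the *aggregate* opportunity set.
import numpy as np

rng = np.random.default_rng(0)

n_regions = 200
pop = rng.uniform(1e4, 1e5, n_regions)            # households per region
baseline = rng.uniform(300.0, 1500.0, n_regions)  # baseline annual consumption ($/household)
cost = 1.0 + rng.uniform(0.0, 0.5, n_regions)     # $ spent per $ delivered

spent = np.zeros(n_regions)                       # $ spent per region so far

def marginal_ce(i):
    """Marginal utility per dollar spent in region i, given log utility."""
    consumption = baseline[i] + spent[i] / (cost[i] * pop[i])
    return 1.0 / (cost[i] * consumption)

step = 1e6                                        # allocate $1m at a time
totals, mces = [], []
for _ in range(400):                              # $400m total budget
    i = max(range(n_regions), key=marginal_ce)    # greedy: best region next
    spent[i] += step
    totals.append(spent.sum())
    mces.append(marginal_ce(i))

# Implied eta of the aggregate opportunity set: marginal cost-effectiveness
# scales roughly like spending^(-eta), so fit a log-log slope.
eta_hat = -np.polyfit(np.log(totals), np.log(mces), 1)[0]
print(f"household eta = 1.0; implied aggregate eta ~= {eta_hat:.2f}")
```

With heterogeneity like this, the fitted slope lands well below 1 even though every household has eta = 1: within a single region, log utility over a positive baseline already implies a local elasticity below 1, and the greedy funder's ability to expand into near-as-good regions flattens the aggregate curve further.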

 

The same should be true of other causes. GiveWell's malaria funding targets the most cost-effective regions first; then their second million is spread across (a) improving administration, frequency, etc. in the same community as the first million, and (b) expanding to slightly less cost-effective regions. And they face an inefficient baseline allocation, since (a) countries vary in their level of development and in how well they provide basic healthcare, and (b) other funders (like the Global Fund) don't allocate their money efficiently, due to their own constraints. So whatever the spending/impact curvature might be of fighting malaria in one community (and for the record, I don't see why it should be anchored to 1.5 or even 1), you should expect the overall malaria opportunity set to have flatter curvature than that. (And of course you should expect the overall philanthropic utility function to be even flatter, since it aggregates across causes.)
 

Time horizon for changes in non-EA spending

I think you’re right to note the distinction between an “all else equal” philanthropic utility function, which holds constant some assumed level of spending by other actors, and an adjusted philanthropic utility function which considers how surprises in others’ spending are correlated with surprises in the resources available to us. I might call these the “conditional” vs “marginal” utility functions, analogous to how we can refer to “conditional” vs “marginal” probability distributions.
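
To make that analogy concrete, here's one way the two objects could be written down (my notation, not Phil's): let \(u\) be utility over total spending on a cause, \(\bar{s}\) the assumed fixed level of non-EA spending, \(S\) non-EA spending treated as a random variable, and \(R\) the resources available to us. Then, roughly:

\[
U_{\text{cond}}(x) = u(x + \bar{s}), \qquad
U_{\text{marg}}(x) = \mathbb{E}\big[\, u(x + S) \mid R = x \,\big].
\]

The adjustment Phil proposes matters exactly to the extent that \(S\) co-moves with \(R\).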

However, I don’t think this adjustment will make a huge difference, since a lot of the non-EA spending doesn’t seem especially exposed to asset-market fluctuations in a year-to-year way. I think you might be right over long timelines — for any assets that are intended to fund philanthropy many decades from now, this adjustment could make an important difference to investment decisions. But I think that should be a small share of EA assets.

 

Here are some non-EA resources that go toward meeting the needs of the rural poor in LMICs: LMIC government spending; foreign aid; the output of subsistence agriculture. (Subsistence agriculture is ~50% of calories in rural Africa.) I don't think any of those are nearly as exposed to yearly fluctuations in global asset markets as a 60/40 portfolio would be. If this non-EA set of resources is taking less risk than is justified by the curvature of the utility function, then shouldn't EAs take more risk to balance that out? (I think the answer to that question is no, because I don't trust models like these to advise us on how much risk to take.)

 

Over a timeline of decades rather than years, I agree that the non-EA resources available to the global poor are probably correlated with EA assets. See e.g. Table 6 of Ravallion and Chen 1997. (Though note that individual LMICs will experience growth rates that aren't perfectly correlated with EA assets – for example, US equity markets did better in the 80s and 90s than in the 70s and 2000s, but LMIC growth was higher in the 70s and 2000s. Also note that EA's philanthropic opportunities aren't perfectly correlated with LMIC poverty – see for example the 2019 paper “Most of Africa's nutritionally deprived women and children are not found in poor households”, not to mention farm animal welfare or longtermism.) But I don't think that fact is very action-relevant to us, since most of EA's current assets are targeted toward spending in the next few decades.


To see this, consider: if you planned for your endowment to stick around perpetually, you could spend at the pace of asset growth and keep a constant level of assets. For example, if you expected 7% asset returns, you'd spend roughly 7% annually. That means you'd expect that by 2043 you'll have spent 75% of your 2023 assets, while the remaining 25% will have grown enough to keep your overall asset value constant. (25% * 1.07^20 ≈ 100%)
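
Spelling out that arithmetic:

\[
0.25 \times 1.07^{20} \approx 0.25 \times 3.87 \approx 0.97 \approx 1,
\]

and equivalently, the share of 2023 assets implicitly earmarked for spending after 2043 is \(1/1.07^{20} \approx 26\%\), i.e. roughly a quarter.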


So from the perspective of 2023, maybe 25% of your assets are “allocated” to spending that is multiple decades from now. Therefore, maybe 25% of your assets should follow your advice here, and pursue something like a 60/40 portfolio. That’s if you plan for your endowment to exist perpetually (in contrast, Cari Tuna and Dustin Moskovitz want to spend down their assets within their lifetimes), and if you buy the rest of your arguments about e.g. utility curvatures.
 

Bottom line

Overall, I end up skeptical of claims that we should act as if our philanthropic opportunity set has steeper-than-logarithmic diminishing marginal returns, for both empirical and theoretical reasons – though I agree with your general conservatism about investment risk, and your deference to the wisdom of longstanding practice in portfolio management. 

On empirical grounds: OP has tried to estimate empirically the spending/impact curvature of a big philanthropic opportunity set – the GiveWell top charities – and ended up with an eta parameter of roughly 0.38. (That is, to a first approximation, each 10% increase in annual spending decreases marginal cost-effectiveness by about 3.8%.) Your main critique of that estimate seems to be that it's “conditional” rather than “marginal” as defined above – but I think it's very unlikely that your proposed adjustment will bring it from 0.38 to something above 1, for the reasons I gave in the "time horizon" section of this comment.
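
For the record, the exact conversion under a constant-elasticity model, where marginal cost-effectiveness scales as \(\mathrm{MCE}(s) \propto s^{-\eta}\):

\[
\frac{\mathrm{MCE}(1.1\,s)}{\mathrm{MCE}(s)} = 1.1^{-0.38} \approx 0.964,
\]

a decrease of about 3.6%; the 3.8% figure is the linear approximation \(\eta \times 10\%\).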

And then on theoretical grounds: even if each household's utility function has steeper-than-log diminishing returns, by the time you aggregate across households to the sub-cause level, and then aggregate across sub-causes to the cause level, and then aggregate across causes to the level of the overall philanthropic opportunity set, you'll end up with much shallower curvature in aggregate, as I argued in the "aggregating across sub-causes" section of this comment.

The reason sophisticated entities like hedge funds hold bonds isn't so they can collect a cash flow 10 years from now. It's because they think bond prices will go up tomorrow, or next year.

The big entities that hold bonds for the future cash flows are e.g. pension funds. It would be very surprising and (I think) borderline illegal if the pension funds ever started reasoning, "I guess I don't need to worry about cash flows after 2045, since the world will probably end before then. So I'll just hold shorter-term assets."

I think this adds up to the conclusion that no big investors can directly profit from the final outcome here. Though, as everyone seems to agree, anyone could profit by being short bonds (or underweight bonds) as the market started to price in a substantial probability of AGI.

Thanks a ton for this clarification! Very helpful.

Thanks for this! I oversee the Macroeconomic Stabilization grant portfolio at Open Phil. We’ve been reevaluating this issue area in light of the current macroeconomic conditions and policy landscape, and we’re planning to write more about our own perspective on this in the future -- so I won’t reply line by line here. But we're always eager for substantive external critique, so I wanted to flag that we'd seen this and appreciate you sharing!

One clarifying question: are you suggesting that we could reduce inflation risks without running higher unemployment in expectation? From my perspective, a dovish macroeconomic policy framework has both costs and benefits. We're generally going to face too-high inflation at one part of the business cycle and too-high unemployment at another part of the cycle. A philanthropist could push for a more dovish framework in order to minimize the unemployment overshoot at the expense of risking a bigger inflation overshoot. I see those two risks as trading off against each other -- curious if you agree.

Thanks for this! I oversee the Macroeconomic Stabilization grant portfolio at Open Phil. We very much appreciate the thoughtful critique, and the reactions here. We don't do detailed replies by default, but I wanted to flag that we've seen this and appreciate you sharing it.


The risks you describe here are certainly worth considering, and we've tried to weigh them whenever we make grants in this area. Historically, we didn't think they outweighed the benefits of more expansionary macro policy. But we've been reevaluating this issue area in light of the current macroeconomic conditions and policy landscape -- we may have more to say on that in the coming months.

If I were you, I'd try to reach out to people like the former Tory whip you quoted, and say, "We've got some money and some energized people, what are the other ingredients to make a difference on this? Who should we talk to, how do we plug into some existing infrastructure, etc."