[1. Do you think many major insights from longtermist macrostrategy or global priorities research have been found since 2015?]
I think "major insights" is potentially a somewhat loaded framing; it seems to imply that only highly conceptual considerations that change our minds about previously-accepted big picture claims count as significant progress. I think very early on, EA produced a number of somewhat arguments and considerations which felt like "major insights" in that they caused major swings in the consensus of what cause areas to prioritize at a very high level; I think that probably reflected that the question was relatively new and there was low-hanging fruit. I think we shouldn't expect future progress to take the form of "major insights" that wildly swing views about a basic, high-level question as much (although I still think that's possible).
[2. If so, what would you say are some of the main ones?]
Since 2015, I think we've seen good analysis and discussion of AI timelines and takeoff speeds, discussion of specific AI risks that go beyond the classic scenario presented in Superintelligence, better characterization of multipolar and distributed AI scenarios, some interesting and more quantitative debates on giving now vs. giving later and "hinge of history" vs. "patient" longtermism, etc. None of these have provided definitive or authoritative answers, but they all feel useful to me as someone trying to prioritize where Open Phil dollars should go.
[3. Do you think the progress has been at a good pace (however you want to interpret that)?]
I'm not sure how to answer this; I think that, taking into account the expected low-hanging-fruit effect and the relatively low investment in this research, progress has probably been pretty good, but I'm very uncertain about the degree of progress I "should have expected" on priors.
[4. Do you think that this pushes for or against allocating more resources (labour, money, etc.) towards that type of work?]
I think ideally the world as a whole would be investing much more in this type of work than it is now. A lot of the bottleneck is that the work is not very well-scoped or broken into tractable sub-problems, which makes it hard for a large number of people to be onboarded to it quickly.
[5. Do you think that this suggests we should change how we do this work, or emphasise some types of it more?]
Related to the above, I'd love for the work to become better-scoped over time -- this is one thing we prioritize highly at Open Phil.
The ideas behind patient altruism have received substantial discussion in academia.
But this literature doesn't seem well-known among EAs. I personally didn't know about any of it until Phil Trammell cited some of it in his paper on patient philanthropy. Trammell also argued that most people use too high a discount rate, so patient philanthropists should compensate by not donating any money now and instead investing it to donate later; as far as I know, this is a novel argument.
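As a minimal sketch of the mechanism (my own toy illustration, not a model from Trammell's paper; all parameter values are made up): if the philanthropist's own discount rate is below the market rate of return, investing and donating later beats donating now, other things equal.

```python
# Toy "give now" vs. "give later" comparison for a philanthropist whose
# pure rate of time preference (rho) is below the market rate of return (r).
# All parameter values are illustrative, not estimates.

budget = 1_000_000   # dollars available today
r = 0.05             # annual market rate of return
rho = 0.01           # philanthropist's own discount rate
years = 30           # how long the patient philanthropist waits

# Value, in the philanthropist's own terms, of donating everything today:
value_give_now = budget

# Invest for `years` years at r, donate the proceeds, and discount that
# future donation back to the present at the philanthropist's rate rho:
value_give_later = budget * ((1 + r) / (1 + rho)) ** years

print(f"give now:   ${value_give_now:,.0f}")
print(f"give later: ${value_give_later:,.0f}")  # larger whenever r > rho
```

This only shows why a below-market discount rate pushes toward patience; the actual argument has to contend with diminishing returns to donations, expropriation risk, value drift, and so on.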
This has been much discussed since before the beginning of EA, with Robin Hanson being a particularly devoted proponent.
The post cites the Stern discussion to make the point that (non-discounted) utilitarian policymakers would implement more investment, but to my mind that’s quite different from the point that, absent cosmically exceptional short-term impact, the patient longtermist consequentialist would save. Utilitarian policymakers might implement more redistribution too. Given policymakers as they are, we’re still left with the question of how utilitarian philanthropists with their fixed budgets should prioritize between filling the redistribution gap and filling the investment gap.
In any event, if you/Owen have any more unpublished pre-2015 insights from private correspondence, please consider posting them, so those of us who weren’t there don’t have to go through the bother of rediscovering them. : )
"The post cites the Stern discussion to make the point that (non-discounted) utilitarian policymakers would implement more investment, but to my mind that’s quite different from the point that absent cosmically exceptional short-term impact the patient longtermist consequentialist would save."
That was explicitly discussed at the time. I cited the blog post as a historical reference illustrating that such considerations were in mind, not as a comprehensive publication of everything people discussed at the time (in fact, there was no such publication). That's one reason, in addition to your novel contributions, that I'm so happy about your work! GPI also has a big hopper of projects that add a lot of value by further developing and explicating ideas that are not radically novel, so that they have more impact and get more improvement and critical feedback.
If you would like to do further recorded discussions about your research, I'm happy to do so anytime.
That post just makes the claim that "all we really need are positive interest rates". My own point which you were referring to in the original comment is that, at least in the context of poverty alleviation (/increasing human consumption more generally), what we need is pure time preference incorporated into interest rates. This condition is neither necessary nor sufficient for positive interest rates.
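To make this concrete, one standard decomposition of interest rates (my gloss via the Ramsey equation, not anything in Hanson's post) is

$$r = \delta + \eta g,$$

where $\delta$ is the rate of pure time preference, $\eta$ the elasticity of marginal utility of consumption, and $g$ the growth rate of consumption. Growth alone can make $r$ positive even with $\delta = 0$, and $\delta > 0$ is compatible with $r \le 0$ if $g$ is sufficiently negative; so pure time preference in interest rates is neither necessary nor sufficient for positive interest rates.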
Hanson's post then says something which sounds kind of like my point, namely that we can infer that it's better for us as philanthropists to invest than to spend if we see our beneficiaries doing some of both. But I could never figure out what he was saying exactly, or how it was compatible with the point he was trying to make that all we really need are positive interest rates.
Could you elaborate?
I liked this answer.
One thing I'd add: My guess is that part of why Max asked about novel insights is that he's wondering what the marginal value of longtermist macrostrategy or global priorities research has been since 2015, as one input into predictions about the marginal value of more such research. Or at least, that's a big part of why I find this question interesting.
So another interesting question is what is required for us to have "many smaller insights" and "the refinement and diffusion of ideas that aren’t strictly speaking novel"? E.g., does that require orgs like FHI and CLR? Or could we do that without paid full-time researchers, just via a bunch of people blogging in their spare time?
I don't know about generating many smaller insights or refining ideas. But I'd guess that mere "diffusion" probably doesn't require full-time researchers, just good and well-respected communicators.
But I'd also guess that there's another thing that happened: Active critique and screening of a large set of potentially important insights, to identify those that are actually important and correct (or sufficiently likely to be correct to warrant major shifts in decisions). And that process…
I think that's a very interesting question, and one I've sometimes wondered about.
Oversimplifying a bit, my answer is: We need neither just bloggers nor just orgs like FHI and CLR. Instead, we need to move from a model where epistemic progress is achieved by individuals to one where it is achieved by a system characterized by a diversification of epistemic tasks, specialization, and division of labor. (So in many ways I think: we need to become more like academia.)
Very roughly, it seems to me that early intellectual progress in EA often happened via distinct and actionable insights found by individuals. E.g. "AI alignment is super important" or "donating to the best as opposed to typical charities is really important" or "current charity evaluators don't help with finding impactful charities…"
[Off the top of my head. I don't feel like my thoughts on this are very developed, so I'd probably say different things after thinking about it for 1-10 more hours.]
[ETA: On a second reading, I think some of the claims below are unhelpfully flippant and, depending on how one reads them, uncharitable. I don't want to spend the significant time required for editing, but want to flag that I think my dispassionate views are not super well represented below.]
Things that immediately come to mind, not necessarily the most important levers:
- Identify skills or bodies of knowledge that seem relevant for longtermist research, and where necessary design curricula for deliberate practice of these. In addition to having other downsides, I think our norms of single-dimensional evaluations of people (I feel like I hear much more often that someone is "promising" or "impressive" than that they're "good at <ability or skill>") are evidence of a harmful laziness that helps entrench the status quo.
- Possibly something like a double…
Several reasons:
- In many cases, doing thorough work on a narrow question and providing immediately impactful findings is simply too hard. This used to work well in the early days of EA when more low-hanging fruit was available, but rarely works any more.
- Instead of having 10 shallow takes on immediately actionable question X, I'd rather have 10 thorough takes on different subquestions Y_1, ..., Y_10, even if it's not immediately obvious how exactly they help with answering X (there should be some plausible relation, however). Maybe I expect 8 of these 10 takes to be useless, but, unlike adding more shallow takes on X, the thorough takes on the 2 remaining subquestions enable incremental and distributed intellectual progress:
- It may allow us to identify new subquestions we weren't able to find through doing shallow takes on X.
- Someone else can build on the work, and e.g. do a thorough take on another subquestion that helps illuminate how it relates to X, what else we need to know to use the thorough findings to make progress on Y, etc.
- The expected benefit from unknown unknowns is larger. Random examples: the economic historians who assembled data on historic GDP growth pre…