Arepo
Fwiw I think total hedonic utilitarianism is 'ultimately correct' (inasmuch as that statement means anything), but nonetheless strongly agree with everything else you say.

> I think animal welfare work is underrated from a long-term perspective.

Fwiw I don't disagree with that, and should have put it on my list. I would nonetheless guess it's lower EV than global health.

> What is the argument for Health and development interventions being best from a long-term perspective?

That's a pretty large question, since I have to defend it against all alternatives (and per my previous comment I would guess some subset of GCR reduction work is better overall). But some views that make me think it could at least be competitive:

  • I am highly sceptical of both the historical track record of longtermist-focused work in improving the far future and, relatedly, of its incentives and (lack of) feedback loops
    • I find the classic 'beware surprising convergence' class of argument for why we should try to optimise directly for longtermism unconvincing theoretically, since it ignores the greater chance of finding the best longtermist-affecting neartermist intervention, thanks to the tighter neartermist feedback loops
    • I think, per my discussion here, that prioritising events according to their probability of wiping out the last human is a potentially major miscalculation of long-term expectation
  • the main mechanism you describe having longtermist value is somewhat present in GHD (expanding the moral circle)
    • Its being much less controversial (and, relatedly, less based on somewhat subjective moral-weight judgements) means it's an easier norm to spread - so while it might not expand the moral circle as much in the long term, it probably expands it faster in the short term (and we can always switch to something more ambitious once the low-hanging moral-circle fruit are picked)
  • related to lack of controversy, it is much more amenable to empirical study than either longtermist or animal welfare work (the latter having active antagonists who try to hide information and prevent key interventions)
  • I find compelling the economic argument that moral circle expansion for animals will come naturally from in vitro meat. I don't think historical examples of sort-of-related things not happening are a strong counterargument. I don't see what the incentive would be to factory farm meat in a world where you can grow it far more easily.
    • For the record, I'm not complacent about this and do want animal welfare work to continue. It's just not what I would prioritise on the margin right now (if social concern for nonhuman animals dropped below a certain level I'd change my mind).
    • I am somewhat concerned about S-risk futures, but I think most of the risk comes from largely unrelated scenarios: e.g. (1) economic incentives to create something like Hanson's Age of Em world, where the supermajority of the population are pragmatically driven to subsistence living by an intentionally programmed fear of death (not necessarily this exact scenario, but a range like it), or (2) intentionally programmed hellworlds. I'm really unsure about the sign of animal welfare work's effect on the probability of such outcomes
    • I'm not negative-leaning, so I think that futures in which we thrive and are generally benign, but in which there are small numbers of factory-farm-like experiences, can still be much better on net than a future in which we e.g. destroy civilisation, are forever confined to low-to-medium-tech civilisations on Earth, and at any given point either exploit animals in factories or just don't have any control over the biosphere and leave it to take care of itself
  • IIRC John and Hauke's work suggested GHD work is in fact pretty high EV for economic growth, but argued that growth-targeting strategies were much higher still (the claim I'm sceptical of)
  • To my knowledge, the EV of economic growth from RCT-derived interventions has been pretty underexplored. I've seen a few rough estimates, but nothing resembling a substantial research program (though I could easily have missed one).

I reviewed the piece you linked and fwiw strongly disagreed that the case it made was as clear-cut as the authors conclude. In particular, IIRC, they observe a limited historical upside from RCT-backed interventions but didn't seem to account for the far smaller amount of money that had been put into them; they also gave a number of priors that I didn't necessarily strongly disagree with, but which seemed like they could be an order of magnitude off in either direction, and the end result was quite sensitive to these.

That's not to say I think global health interventions are clearly better - just that I think the case is open (but also that, given the much smaller global investment in RCTs, there's probably more exploratory value in those).

I could imagine any of the following (among others) turning out to be the best safeguard of the long term:

  • Health and development interventions
  • Economic growth work
  • Differential focus on interplanetary settlement
  • Preventing ecological collapse
  • AI safety work
  • e/acc (their principles taken seriously, not the memes)
  • AI capabilities work (because of e/acc)
  • Work on any subset of global catastrophes (including seemingly minor ones like Kessler syndrome, which in itself has the potential to destabilise civilisation)

My best guess is the last one, but I'm wary of any blanket dismissal of any subset of the above.

I'm philosophically a longtermist, but suspect better-evidenced short-termist interventions are comparable to, if not much greater than, 'direct longtermism' in expectation.

In the long run I think a thriving human descendant-line with better cooperation norms is going to lead to better total phenomenal states than reduced factory farming will.

Is there a way to live low-cost in Berlin, short of extreme deprivation?

Do you know broadly what getting visas is like?

> Interstellar civilization operating on technology indistinguishable from magic

'Indistinguishable from magic' is a huge overbid. No-one's talking about FTL travel. There's nothing in current physics that prevents us from building generation ships given a large enough economy, and there are a number of options consistent with known physics for propelling them - some of which have already been developed, others of which are tangible but not yet in reach, and others of which get pretty outlandish.

> I don't see why nukes and pandemics and natural disaster risk should be approximately constant per planet or other relevant unit of volume for small groups of humans living in alien environments

Pandemics seem likely to be relatively constant. Biological colonies will have strict atmospheric controls, and might evolve (naturally or artificially) to be too different from each other for a single virus to target them all even if it could spread. Nukes aren't a threat across star systems unless they're accelerated to relativistic speeds (and then the nuclear-ness is pretty much irrelevant).

> the risk of human extinction (as opposed to significant near-term utility loss) from pandemics, nukes or natural disasters is already zero

I don't know anyone who asserts this. Ord and other longtermists think it's very low, though not because of bunkers or vaccination. I think that the distinction between killing all and killing most people is substantially less important than those people (and you?) believe.

> the AGI that destroys humans after they acquire interstellar capabilities is no more speculative than the AI that destroys humans next Tuesday

This is an absurd claim.

Hi Zachary,

First off, I want to thank you for taking what was obviously a substantial amount of time to reply (and also to Sarah in another comment that I haven't had time to reply to). This is, fwiw, already well above the level of community engagement that I've perceived from most previous heads of CEA.

On your specific comments, it's possible that we agree more than I expected. Nonetheless, there are still some substantial concerns they raise for me. In typical Crocker-y fashion, I hope you'll appreciate that me focusing on the disagreements for the rest of this comment doesn't imply that they're my entire impression. Should you think about replying to this, know that I appreciate your time, and I hope you feel able to reply to individual points without being morally compelled to respond to the whole thing. I'm giving my concerns here as much for your and the community's information as with the hope of a further response.

> I view transparency as part of the how, i.e. I believe transparency can be a tool to achieve goals informed by EA principles, but I don’t think it’s a goal in itself. 

In some sense this is obviously true, but I believe it gerrymanders what the difference between 'what' and 'how' actually is.

For example, to my mind 'scout mindset' doesn't seem any more central a goal than 'be transparent'. In the post by Peter you linked, his definition of it sounds remarkably like 'be transparent', to wit: 'the view that we should be open, collaborative, and truth-seeking in our understanding of what to do'. 

One can imagine a world where we should rationally stop exploring new ideas and just make the best of the information we have (this isn't so hard to imagine if it's understood as a temporary measure to firefight urgent situations), and a world where major charities can make substantial decisions without explanation and this tends to produce trustworthy and trusted policies - but I don't think we live in either world most of the time.

In the actual world, the community doesn't really know, for example: with what weighting CEA prioritises longtermist causes over others; how it prioritises AI vs other longtermist causes; how it runs admissions at EAGs; why some posts get tagged as ‘community’ on the forum, and therefore effectively suppressed, while similar ones stay at the top level; why the ‘community’ tag has been made admin-editable-only; what region pro rata rates CEA uses when contracting externally; what your funding breakdown looks like (or even the absolute amount); what the inclusion criteria for 'leadership' forums are, or who the attendees are; or many, many other such questions people in the community have urgently raised. And we don't have any regular venue for discussing such questions and community-facing CEA policies and metrics with some non-negligible chance of CEA responding - a simple weekly office hours policy could fix this.

> confidentiality seems like an obvious good to me, e.g. with some information that is shared with our Community Health Team

Confidentiality is largely unrelated to transparency. If in any context someone speaks to someone else in confidence, there have to be exceptionally good reasons for breaking that confidence. None of what I'm pointing at in the previous paragraph would come close to asking them to do that.

> Amy Labenz (our Head of Events) has stated, we want to avoid situations where we share so much information that people can use it to game the admissions process.

I think this statement was part of the problem... We as a community have no information on which to evaluate the statement, and no particular reason to take it at face value. Are there concrete examples of people gaming the system this way? Is there empirical data showing some patterns that justify this assertion (and comparing it to the upsides)? I know experienced EA event organisers who explicitly claim she's wrong on this. As presented, Labenz's statement is in itself a further example of lack of transparency that seems not to serve the community - it's a proclamation from above, with no follow-up, on a topic that the EA community would actively like to help out with if we were given sufficient data.

This raises a more general point - transparency doesn't just allow the community to criticise CEA, but enables individuals and other orgs to actively help find useful info in the data that CEA otherwise wouldn't have had the bandwidth to uncover.

> I think transparency may cause active harm for impactful projects involving private political negotiations or infohazards in biosecurity

These scenarios get wheeled out repeatedly for this sort of discussion (Chris Leong basically used the same ones elsewhere in this thread), but I find them somewhat disingenuous. For most charities, including all core-to-the-community EA charities, this is not a concern. I certainly hope CEA doesn't deal in biosecurity or international politics - if it does, then the lack of transparency is much worse than I thought! 

> Transparency is also not costless, e.g. Open Philanthropy has repeatedly published pieces on the challenges of transparency

All of the concerns they list there apply equally to all the charities that GiveWell, EA Funds etc. expect to be transparent. I see no principled reason in that article why CEA, OP, EA Funds, GWWC or any other regranters should expect so much more transparency than they're willing to offer themselves. Briefly going through their three key arguments:

'Challenge 1: protecting our brand' - empirically, I think this is something CEA and EV have substantially failed to do in the last few years. And in most of the major cases (continual failure for anyone to admit any responsibility for FTX; confusion around Wytham Abbey - the fact that that was 'other CEA' notwithstanding; PELTIV scores and other elitism-favouring policies; the community health team not disclosing allegations against Owen (or, more politicly, 'a key member of our organisation') sooner; etc.) the bad feeling was explicitly over lack of transparency. I think publishing some half-baked explanations that summarised the actual thinking behind these at the time (rather than in response to them later being exposed by critics) would a) have given people far less to complain about, and b) possibly have generated (kinder) pushback from the community that might have averted some of the problems as they eventually manifested. I have also argued that CEA's historical media policy of 'talk as little as possible to the media' both left a void in media discussion of the movement that was filled by the most vociferous critics and generally worsened the epistemics of the movement.

'Challenge 2: information about us is information about grantees' - this mostly doesn't apply to CEA. Your grantees are the community and community orgs, both of whom would almost certainly like more info from you. (It also does apply to non-meta charities like GiveDirectly, whom we nonetheless expect to gather large amounts of info on the community they're serving - but in that situation we think it's a good tradeoff.)

'Challenge 3: transparency is unusual' - this seems more like a whinge than a real objection. Yes, it's a higher standard than the average nonprofit holds itself to. The whole point of the EA movement was to encourage higher standards in the world. If we can't hold ourselves to those raised standards, it's hard to have much hope that we'll ever inspire meaningful change in others.

> I also think it’s possible to have impartiality without scope sensitivity. Animal shelters and animal sanctuaries strike me as efforts that reflect impartiality insofar as they value the wellbeing of a wide array of species, but they don’t try to account for scope sensitivity

This may be quibbling, but I would consider focusing on visible subsets of the animal population (esp. pets) a form of partiality. This particular disagreement doesn't matter much, but it illustrates why I don't think gestures towards principles that are really not that well defined are that helpful for giving a sense of what we can expect CEA to do in future.

> “While we often strive to collaborate and to support people in their engagement with EA, our primary goal is having a positive impact on the world, not satisfying community members (though oftentimes the two are intertwined).”

I think this is politician-speak. If AMF said 'our primary goal is having a positive impact on the world rather than distributing bednets' and used that as a rationale to remove their hyperfocus on bednets, I'm confident a) that they would have a much less positive impact on the world, and b) that GiveWell would stop recommending them for that reason. Taking a risk on choosing your focus and core competencies is essential to actually doing something useful - if you later find out that your core competencies aren't that valuable, you can either disband the organisation or attempt a radical pivot (as Charity Science's founders did on multiple occasions!).

> I think this was particularly true during the FTX boom times, when significant amounts of money were spent in ways that, to my eyes, blurred the lines between helping the community do more good and just plain helping the community. See e.g. these posts for some historical discussion ... We have made decisions that may make our events less of a pleasant experience (e.g. cutting back on meals and snack variety)

I think this, along with the transparency question, is our biggest disagreement and/or misunderstanding. There's a major equivocation going on here about exactly *which* members of the community you're serving. I am entirely in favour of cutting costs at EAGs (the free wine at one I went to tasted distinctly of dead children), and of reducing all-expenses-paid forums for 'people leading EA community-building'. I want to see CEA support people who actually need support to do good - the low-level community builders with little to no career development, especially in low- or middle-income countries whose communities are being starved; the small organisations with good track records but mercurial funding; all the talented people who didn't go to top-100 universities and therefore get systemically deprioritised by CEA. These people were never major beneficiaries of the boom, but were given false expectations during it and have been struggling in the general pullback ever since.

> For example, for events, our primary focus is on metrics like how many positive career changes occur as a result of our events, as opposed to attendee satisfaction.

I think the focus would be better placed on why attendees are satisfied or dissatisfied. If I go to an event and feel motivated to work harder at what I'm already doing, or build a social network that makes me feel enough better about my life that I counterfactually make or keep a pledge, those things are equally important. There's something very patriarchal about CEA assuming they know better than members of the community what makes those members more effective. And, like any metric, 'positive career changes' can be gamed, or could just be the wrong thing to focus on.

> I think if anyone was best able to make a claim to be our customers, it would be our donors. Accountability to the intent behind their donations does drive our decision-making, as I discussed in the OP. 

If both CEA and its donors are effectiveness-minded, this shouldn't really be a distinction - per my comments about focus above, serving CEA's community is about the most effective thing an org with a community focus can do, and so one would hope the donors would favour it. But also, this argument would be stronger if CEA only took money from major donors. As it is, as long as CEA accepts donations from the community, sometimes actively solicits them, and broadly requires them (subject to an honesty policy) from people attending EAGs, then your donors are the community and hence, either way, your customers.

Suppose we compare two nonprofit orgs doing related work. Let's use some real examples: Rethink Priorities and Founders Pledge, both of whom do global health and climate change research; CEA (who run EAGs) and any community groups who run EAGxes; perhaps CFAR and Khan Academy.

Ideally, in an effectiveness-minded movement, every donation to one of them rather than the other should express some view on the relative capability of that org to execute its priorities - it is essentially a bet that that org will make better use of its money.

We can use a simple combinatorial argument to show that the epistemic value of this view rapidly approaches 0 the more things either or both of those organisations are doing. If AlphaOrg does only project A1, and BetaOrg does only project B1 (and, for the sake of simplicity, both projects have the same focus), then donating to AlphaOrg clearly shows that you think AlphaOrg will execute it better - that A1 > B1.

But if AlphaOrg adds a single (related or unrelated) project, A2, to their roster, the strength of the signal drops to 1/6th: now in donating to AlphaOrg, I might be expressing the view that A1 > B1 > A2, that A2 > B1 > A1, that A1 > A2 > B1, or that A2 > A1 > B1, or (if I think the lesser projects sum to more than the greater one) that B1 > A1 > A2 or B1 > A2 > A1.

In general, the number of possible preference orderings we can have between just two orgs respectively running m and n projects between them is (m + n)![^end] (meaning 3*2 = 6 for three projects, 4*6 = 24 for four, 5*24 = 120 for five, and so on). If we also have GammaOrg with k projects of its own in the comparison, then we have (m + n + k)! possible preference orderings.

Assuming a typical EA org receives money from a couple of hundred donors a year, each of whom we might consider a 'vote' or endorsement, that means that on naive accounting (where we divide votes by preference orderings), as few as 6 projects between two relevant orgs give us less than a single endorsement's worth of info on which of their projects effectiveness-minded donors actually support.
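To make the arithmetic concrete, here is a minimal sketch in Python (the function names are mine, and the 200-donor figure is just the 'couple of hundred' estimate above - this is an illustration of the counting, not a model of real donor behaviour):

```python
from math import factorial

def possible_orderings(*project_counts: int) -> int:
    """Number of strict preference orderings over all the projects
    run by the orgs being compared, i.e. (m + n + ...)!."""
    return factorial(sum(project_counts))

def votes_per_ordering(donors: int, *project_counts: int) -> float:
    """Naive signal strength: donor 'votes' divided by the number of
    orderings any one donation is consistent with."""
    return donors / possible_orderings(*project_counts)

# Two single-project orgs: a donation pins down A1 > B1 exactly.
print(possible_orderings(1, 1))       # 2
# AlphaOrg adds A2: 3! = 6 possible orderings, so 1/6th the signal.
print(possible_orderings(2, 1))       # 6
# Two orgs with 4 projects each: 8! = 40320 orderings.
print(possible_orderings(4, 4))       # 40320
# ~200 donors against 6 projects' worth of orderings (6! = 720):
print(votes_per_ordering(200, 3, 3))  # ~0.28 'endorsements' per ordering
```

Even at two orgs with three projects each, the 720 possible orderings already outnumber a couple of hundred annual donations several times over, and the factorial growth only gets worse from there.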

Obviously there are other considerations. Reduced administrative burden from combining is perhaps the foremost; also, major donations can be restricted, somewhat mimicking the effect of donating to a more focused org (though if the org also receives unrestricted donations, it can undo this effect by just redirecting the unrestricted money); also, on priors one might want a very strong team to expand their focus - though doing so would strongly reduce our confidence that they remain a strong team for the expanded purpose.

Nonetheless, with the central EA orgs typically having at least 3 or 4 focus areas each (giving ~40,320 possible preference orderings between two of them), and more if you count in-house support work - software development, marketing etc. - as separate projects, I think the magnitude of this epistemic cost is something a purportedly effectiveness-minded and data-driven movement should consider very seriously.

 

[^end]: To be precise, (m + n)! - 1 if they have the same number of projects, since we can exclude the case where you think every single one of AlphaOrg's projects is better than every single one of BetaOrg's.

I'd be curious how much dialogue and agreement there is between him and heads of other Christian denominations about the general importance of impact and the specific decisions made under that rubric.
