
CEvans

Strategy Director @ EA Oxford
252 karma · Joined · Working (0-5 years)

Comments (24)

Hmm, I'd have thought that most EA orgs pay significantly better than the rest of the charity sector, and are competitive with mid-to-high-paying private sector roles?

I'm pretty confident this is true at a junior level, but is perhaps less so for more senior roles.

I downvoted this forum post because I think the quoted part of the text, while obviously informal, is an annoying strawman of criticisms EA has faced, and represents an attitude towards critique that I think is quite counterproductive. I think the rest of the linked post is significantly better though, and I agree with its general point.

Thanks a lot for posting this! I really enjoyed reading it and the linked Google document. Would anyone on the EA Philippines team be interested in a short meeting with me about this? I currently run EA Oxford and have some specific questions.

Thanks for the thoughtful comment, Amber! I appreciate the honesty in saying both that you think people should think more about prioritisation and that you haven't always done so yourself. I have definitely been like this at times, and I think it is good and important to be able to say both things together. I would be happy to talk through your thinking about prioritisation if you wanted; this kind of conversation comes up frequently in my community building work, and people have told me they found it helpful.

Re. (1), I agree that not everyone can be in the heavy tail of the community distribution, but I don't think there are strong reasons to think that people can't reach the "personal heavy tail" of their own career options, as per the graph. I.e. they might not all be able to have exceptional impact relative to the world or the EA population, but they can have exceptional impact relative to counterfactual versions of themselves, and I think that is still worth striving for.

For (1) and (2), I guess my model of the job market and of impact opportunities is less static than your phrasing suggests yours is. I don't conceive of impact opportunities as a fixed number of "impactful" jobs at EA orgs that we need to fill, and you often don't need to be super "entrepreneurial", per your words, to look beyond this. Perhaps ironically, I think your work is a great example of this (from what I understand): you use your particular writing skills to help other EAs in a way that could plausibly be very impactful, and this isn't necessarily a niche that would have been filled if you hadn't taken it. There are also lots of other career paths (e.g. journalism, politics, earning to give) whose impact potential is probably higher for many people than typical EA org roles, but which aren't necessarily captured by the framing I perceived you to be using. There are also different "levels" of being entrepreneurial which mean you aren't really directly substituting for someone else even if you aren't founding your own organisation (such as deciding on a new research agenda, taking a team in a new direction, etc.).

I think you might have already captured a lot of this with your "failure of imagination..." sentence, but what I am saying does imply that people are capable of finding a path that lets them reach their impact potential. Perhaps some people will be the very best fits for particular "EA org" jobs, but that doesn't mean others can't build very impactful career paths for themselves. I agree that in some cases this might look like contributing to the EA ecosystem and using particular skills to be a multiplier on others doing work you think is really important, but I don't think it is a binary between this and working in a key role at an "EA org".
 

Perhaps another consideration against this is that it seems potentially bad to me for any one person to be the primary mediator for the EA community. There are some worlds where this position is subtly very influential, and I don't think I would want a single person/worldview to have that, in order to avoid systematic mistakes/biases. To be clear, this is not intended as a personal comment - I have no context on you besides this post.

I am excited about having better community mediation though. Perhaps you coordinating a group/arrangement with external people could be a great idea.

Also I think this kind of post about personal career plans with detailed considerations is great so thanks for writing it.

Thanks David, that all makes sense. Perhaps my comment was poorly phrased, but I didn't mean to argue for caring about infohazards per se; I was curious for opinions on it as a consideration (mainly poking to build my/others' understanding of the space). I agree that imposing ignorance on affected groups is bad by default.

Do you think the point I made below in this thread regarding pressure from third-party states is important? Your point that "it doesn't matter to them whether it also devastates agriculture in Africa or Australia" doesn't seem obviously true, at least once indirect effects are considered. Presumably it would matter a lot to Australia, African countries, and most other third-party states, and they might apply relevant political pressure. It doesn't seem obvious that this would be strategically irrelevant in most nuclear scenarios.

Even if there is some increased risk, it is a confusing question how this trades off against being honest and maintaining academic integrity. Perhaps the outside view dominates here enough to follow the general principles: in almost all other contexts I can think of, researchers being honest with governments seems good, though perhaps the more relevant reference class is military-related research, which feels less obvious.

Thanks for the reply and the link to the study - I feel quite surprised by how minor the effect of impact awareness is, but I suppose nuclear war already feels quite salient for most people. I wonder if this could be some kind of metric for evaluating the baseline awareness of a danger (e.g. I would be very interested to see the same study applied to pandemics, AI, animals, etc.).

Re. the effects on government decision making, I think I agree intuitively that governments are sufficiently scope-insensitive (and self-interested in nuclear war circumstances?) that it would not necessarily make a big difference to their own view.

However, it seems plausible to me that a global meme of "any large-scale nuclear war might kill billions globally" could mean far greater pressure from third-party states to avoid a full nuclear exchange. I might try thinking more about this and write something up, but it does seem like that situation could make a country far less likely to use nuclear weapons.

Obviously nuclear exchanges are not ideal for third parties even with no climate effect, and I feel unsure how much of a difference this might make. It also doesn't seem like the meme is currently strong enough to affect government stances on nuclear war, although that is a reasonably uninformed perspective.

Thanks for writing this - it seems very relevant for thinking about prioritisation and more complex x-risk scenarios.

I haven't engaged enough to have a particular object-level take, but I was wondering if you/others had a take on whether we should consider this kind of conclusion somewhat infohazardous? I.e. should we be making this research public if doing so at all increases the chance that nuclear war happens?

This feels like a messy thing to engage with, and I suppose it depends on beliefs around honesty and trust in governments to make the right call with fuller information (of course, there might be some situations where initiating a nuclear war is good).

Thanks for writing this post, Victor; I think your context section reflects a really good, truth-seeking attitude to come into this with. From my perspective, it is also always good to have strong critiques of key EA ideas. To respond to your points:

1 and 2. I agree that the messaging about maximisation carries the danger of people taking it too far, but I think it is quite defensible as an anchor point. Maybe this should be more present in the handbook, but it is worth saying up front that >95% of EAs' lives don't look like those of some extreme naive optimiser, per your framing.

I think I see EA more as "how can you do the most good with X resources", where it is up to you to determine X in terms of your time, money, career, etc. When phrases begin with "EAs should", I generally interpret that as "if you want to have more impact, then you should". The moral demandingness aspect is actually not very present in most EA discourse, and this is probably best for ensuring a healthy community.

EAs are of course human too, and the community, from what I have seen of it, is generally very supportive of people making decisions that are right for themselves when necessary (e.g. career breaks, quitting a job which was very impactful, changing jobs to have kids, etc. - an example (read the comments)). Even if you are a "hard-core utilitarian", placing some value on your own happiness, motivation, etc. is still good for helping you achieve the best you can. Most EAs live on quite healthy salaries, in nice work environments, with a supportive community - while I don't deny that there are also mental health issues within the group, I think EA as a movement thus far hasn't caused many people to be self-sacrificial to the point of being detrimental to their wellbeing.

On whether maximisation is a good goal in the first place: the current societal default in most altruistic work is to not consider optimisation or effectiveness at all. This has led to huge amounts of wasted time and money, which has by extension allowed massive amounts of suffering to continue. While your subpoint 5 about uncertainty is true, I think EA's successes have demonstrated the ability to increase your expected impact with careful thought and evidence, hence the value EA has placed on rationality. Of course people make mistakes, and some projects aren't successful or might even be net negative, but I think it is reasonable to say that the expected value of your actions is what matters. If you buy that the effectiveness of interventions is roughly heavy-tailed, then you should also expect the best options to be much better than the merely "good" ones, and so it is worth taking a maximisation mindset to get the most value.
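(As a toy illustration of that last point, and not something from the original discussion: if you model intervention cost-effectiveness as lognormally distributed, the top options come out many times better than the median one. The sigma value below is an arbitrary assumption chosen purely for illustration.)

```python
import numpy as np

# Toy sketch: assume intervention cost-effectiveness is lognormally
# distributed. sigma=1.5 is an arbitrary illustrative choice, not real data.
rng = np.random.default_rng(0)
effectiveness = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)

median = np.median(effectiveness)
top_1_percent = np.percentile(effectiveness, 99)

print(f"median option:       {median:.2f}")
print(f"99th percentile:     {top_1_percent:.2f}")
print(f"ratio (top / median): {top_1_percent / median:.1f}x")
# Under these assumptions the 99th-percentile option is roughly 30x the median,
# which is the sense in which a heavy tail rewards a maximisation mindset.
```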

I don't think saying "the world is a bad place" is a very useful or meaningful claim on its own, but it is true that there is a huge amount of low-hanging fruit still on the table for making the world much better, and that is worth drawing attention to. People say things like "the world is bad" (which could be phrased better) because, honestly, a lot of the world just doesn't care about massive issues like poverty, factory farming, or threats from e.g. pandemics or AI, and I think it is somewhat important to draw attention to the status quo being a bit messed up.

3. Ah, your initial point is a classic argument that I think targets something no EA actually endorses. Moral uncertainty and worldview diversification are highly regarded in EA, and I think everyone would immediately reject acts that cause huge suffering today in the hope of increasing future potential, for both moral and epistemic uncertainty reasons.

I think your points regarding the insignificance of today's events for humanity's long term rely heavily on a view of non-path-dependency - my guess is that how the next couple of centuries go on key issues like AI, international coordination norms, factory farming, and space governance could all significantly affect the long-term expected value of the future. I think ideas of hinginess are good to think about here; see Hinge of history - EA Forum (effectivealtruism.org).

4. I agree it is generally a confusing topic, and I don't have anything particularly useful to say besides highlighting that people in the community are also very unsure. Fwiw, I think most s-risk scenarios people worry about have more to do with digital suffering or astronomical-scale factory farming; I think human-slavery-type situations are also quite unlikely.

 

Thanks for writing this, I found it helpful for understanding the biosecurity space better!

I wanted to ask whether you have advice, as a community builder, for handling the difficulty of prioritising biosecurity against other causes.

I think it is easy to build an intuitive case that biohazards are not very important or not an existential risk, and my group members often do this (even good fits for biosecurity like biologists and engineers), then dismiss the area in favour of other things. They (and I) do not have access to the threat models that people in biosecurity are actually worried about, which makes the area extremely difficult to evaluate. An example of this kind of thinking is David Thorstad's post on overestimating risks from biohazards, which I thought was somewhat disappointing epistemically: https://ineffectivealtruismblog.com/2023/07/08/exaggerating-the-risks-part-9-biorisk-grounds-for-doubt/.

I suppose the options for managing this situation are:

  1. Encourage deference to the field's view that biosecurity is worth working on relative to other EA areas.

  2. Create some kind of resource which isn't an infohazard in itself, but which makes a good case for biosecurity's importance, perhaps by gesturing at some credible threat models.

  3. Permit the status quo, which seems likely to lead to an underprioritisation of biosecurity.

2 seems best if it is at all feasible, but I am unsure what to choose between 1 and 3.
