I do operations / recruiting / AIS field building work at Redwood Research.
Thanks for writing this! I'm glad it highlights a premise in EA ("some ways of doing good are much better than others") that a lot of people (myself included) accept without very careful consideration.
Having said that, I'm not sure I believe this more generally because of the reasoning you give: "if it's true even there [in global health], where we can measure carefully, it's probably even more true in the general case." That reasoning is part of my belief, but the other part is that directly comparing the naive expected values of interventions across cause areas also makes the premise seem true.
For example, under some views of how to weigh animal welfare against human welfare, donating to cage-free hen corporate outreach campaigns, which affect between 9 and 120 years of chicken life per dollar, seems far more impactful than donating to AMF. Further, my impression is that comparing the expected value of longtermist interventions would also reveal quite a large difference.
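To make that naive expected-value comparison concrete, here is a rough back-of-envelope sketch in Python. The 9–120 chicken-years-per-dollar range comes from the campaign estimate above; the AMF cost-effectiveness figure and the chicken-to-human moral weight are purely illustrative assumptions, not vetted estimates, and the conclusion is extremely sensitive to the moral weight you pick.

```python
# Back-of-envelope comparison of two interventions, per dollar donated.
# All specific numbers below are illustrative assumptions, not vetted estimates.

# Cage-free corporate campaigns: 9-120 chicken-years affected per dollar
# (range cited above); take a rough midpoint for this sketch.
chicken_years_per_dollar = 40

# Assumed moral weight: how much one year of improved chicken welfare counts
# relative to one year of healthy human life (purely illustrative).
moral_weight_chicken_vs_human = 0.01

# AMF: assume roughly $5,000 per life saved and ~50 healthy years gained per
# life, i.e. about 0.01 human life-years per dollar (illustrative).
human_years_per_dollar_amf = 50 / 5_000

chicken_value = chicken_years_per_dollar * moral_weight_chicken_vs_human
amf_value = human_years_per_dollar_amf

print(f"Cage-free campaigns: {chicken_value:.3f} human-year-equivalents per dollar")
print(f"AMF:                 {amf_value:.3f} human-year-equivalents per dollar")
print(f"Ratio: {chicken_value / amf_value:.0f}x")
```

Under these particular assumptions the campaigns come out roughly 40x ahead, but shifting the moral weight by an order of magnitude in either direction changes the verdict, which is the point of doing one's own prioritization.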
This is partially why I encourage members of my group to develop their own cause prioritization.
Hi Tom! Thanks for writing this post. Just curious: would you consider donating to cost-effective climate charities (e.g., those recommended by Effective Environmentalism)? It seems like that could look better from an optics standpoint and fit better with longtermism, depending on your views.
This makes a lot of sense, and thanks for sharing that post! It's certainly true that my role is to help individuals, and as such it's important to recognize their individuality and other priorities.
I suppose I also believe that one can contribute to these fields in the long run by building aptitudes, as Ines's response discusses. But maybe these problems are urgent and require direct work soon, in which case I can see what you are saying about the high levels of specialization.
Hi Misha. Thanks for your answer. I was wondering why you believe top EA cause areas can't make use of people with a wide range of backgrounds and preferences. It seems to me that many of the top causes require varied backgrounds. For example, reducing existential risk seems to need people in academia doing research, in policy enacting insights, in the media raising concerns, in tech building solutions, etc.
Hello! I'm here because of my interest in moral philosophy and global priorities research. If anyone knows of one, I'd be curious to read a history of bioethics and its impact on research.