Executive Director @ Berkeley Existential Risk Initiative
1044 karma · Joined Aug 2019 · Working (6-15 years) · New York, NY, USA


  • Attended an EA Global conference
  • Attended more than three meetings with a local EA group
  • Received career coaching from 80,000 Hours


Topic contributions

Good reasoning, well written. Reading this post convinced me to join the next NYC protest. Unfortunately I missed the one literally two days ago because I waited too long to read this. But I plan to be there in September.


One thing I think is often missing from these sorts of conversations is that "alignment with EA" and "alignment with my organization's mission" are not the same thing! It's a mistake to assume that the only people who understand and believe in your organization’s mission are members of the effective altruism community. EA ideas don’t have to come in a complete package. People can believe that one organization’s mission is really valuable and important, for different reasons, coming from totally different values, and without also believing that a bunch of other EA organizations are similarly valuable.

For "core EA" orgs like the Centre for Effective Altruism[1], there's probably near-total overlap between these two things. But for lots of other organizations the overlap is only incidental, and what you should really be looking for is "alignment with my organization's mission". Perceived EA alignment is a noisy proxy for that, and it's also correlated with a bunch of other things, like culture, thinking style, network, and socioeconomic status, that you either don't care about or actively don't want to be selecting for.

  1. ^

Within EA, work on x-risk is very siloed by type of threat: There are the AI people, the bio people, etc. Is this bad, or good?

Which of these is the correct analogy?

  1. "Biology is to science as AI safety is to x-risk," or 
  2. "Immunology is to biology as AI safety is to x-risk"

EAs seem to implicitly think analogy 1 is correct: some interdisciplinary work is nice (biophysics) but most biologists can just be biologists (i.e. most AI x-risk people can just do AI).

The "existential risk studies" model (popular with CSER, SERI, and lots of other non-EA academics) seems to think that analogy 2 is correct, and that interdisciplinary work is totally critical—immunologists alone cannot achieve a useful understanding of the entire system they're trying to study, and they need to exchange ideas with other subfields of medicine/biology in order to have an impact, i.e. AI x-risk workers are missing critical pieces of the puzzle when they neglect broader x-risk studies.

I agree with your last sentence, and I think in some versions of this it's the vast majority of people. A lot of charity advertising seems to encourage a false sense of confidence, e.g. "Feed this child for $1," or "adopt this manatee". I think this makes use of a near-universal human bias which probably has a name but which I am not recalling at the moment. For a less deceptive version of this, note how much effort AMF and GiveDirectly seem to have put into tracking the concrete impact of your specific donation.


Building off of Jason's comment: another way to express this is that comparing directly to the $5,500 GiveWell bar is only fair for risk-neutral donors (I think?). Most potential donors are not really risk-neutral, and would rather spend $5,001 to definitely save one life than $5,000 for a 10% chance of saving 10 lives. Risk neutrality is a totally defensible position, but so is non-neutrality. It's good to have the option of paying a "premium" for higher confidence (but lower risk-neutral EV).
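The arithmetic behind that comparison can be made explicit. This is a minimal sketch using the hypothetical numbers from the comment above, not a real GiveWell model:

```python
# Hypothetical illustration: two donation options for donors with
# different risk attitudes. Numbers come from the comment above.

def expected_lives_saved(prob: float, lives: int) -> float:
    """Expected value: probability of success times lives saved."""
    return prob * lives

# Option A: pay $5,001 to save one life with certainty.
cost_a, ev_a = 5_001, expected_lives_saved(1.0, 1)

# Option B: pay $5,000 for a 10% chance of saving ten lives.
cost_b, ev_b = 5_000, expected_lives_saved(0.10, 10)

# Cost per expected life saved:
print(cost_a / ev_a)  # 5001.0 — the "premium" paid for certainty
print(cost_b / ev_b)  # 5000.0 — cheaper in risk-neutral terms
```

A risk-neutral donor prefers option B (lower cost per expected life saved); a risk-averse donor may rationally pay the extra $1 per life for the certainty of option A.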

Leaving math mode...I love this post. It made me emotional and also made me think, and it feels like a really central example of what EA should be about. I'm very impressed by your resolve here in following through with this plan, and I'm really glad to have people like you in this community.

Very nice post. "Anarchists have no idols" strikes me as very similar to the popular anarchist slogan, "No gods, no masters." Perhaps the person who said it to you was riffing on that?

I think a simpler explanation for his bizarre actions is that he is probably the most stressed-out person on the face of the earth right now. Or he's not seeing the situation clearly, or some combination of the two. Also probably sleep-deprived, struggling to get good advice from people around him, etc.

(This is not meant to excuse any of his actions or words, I think he's 100% responsible for everything he says and does.)

This sort of falls under the second category, "Grantees who received funds, but want to set them aside to return to creditors or depositors." At least that's how I read it, though the more I think about it the more this category is kind of confusing and your wording seems more direct.

Thanks for the clarification. I agree that the FTX problems are clearly related to crypto being such a new unregulated area, and I was wrong to try to downplay that causal link.

I don't think anonymized donations would help mitigate conflicts of interest. In fact I think it would encourage COIs, since donors could directly buy influence without anyone knowing they were doing so. Currently one of our only tools for identifying otherwise-undisclosed COIs is looking at flows of money. If billionaire A donates to org B, we have a norm that org B shouldn't do stuff that directly helps billionaire A. If that donation was anonymous, we wouldn't know that that was a situation in which the norm applied.

There are some benefits of some level of anonymity in donations. For example, I dislike the practice of universities putting a donor's name on a building in exchange for a large donation. Seems like an impressive level of hubris. I have more respect for donors who don't aggressively publicize their name in this way. However, I do think that these donations should still be available in public records. Donation anonymity ranges from "put my name on the building" at one extreme to "actively obscure the source of the donation" at the other.

I have more thoughts on donor transparency but I'll leave it there for now.

Downvoted because I think this is too harsh and accusatory:

I cannot believe that some of you delete your posts simply because it ends up being downvoted.

Also because I disagree in the following ways:

  • Donating anonymously seems precisely opposed to transparency. At the very least, I don't think it's obvious that donor anonymity works towards the values you're expressing in your post. Personally I think being transparent about who is donating to what organizations is pretty important for transparency, and I think this is a common view.
  • I don't think FTX's mistakes are particular to crypto; they look like normal financial chicanery.
  • "if the only way we aggregate how "good" red-teaming is is by up-votes, that is flawed"
    • IIRC the red-teaming contest did not explicitly consider up-votes in their process for granting awards, and the correlation between upvotes and prize-winners was weak.
  • "What makes EA, EA, what makes EA antifragile, is its ruthless transparency."
    • For better or for worse, I don't think ruthless transparency is a focus or a strength of EA. I agree with your sentence right after that, but I don't think that's much related to transparency.