Hi! I'm somewhat new to EA - I'd heard of the ideas years ago but only started engaging with the community recently after doing the Intro to EA Virtual Program.
I work in International Tax Policy and am more sympathetic to neartermist causes such as global health and poverty reduction than longtermist ones.
I read a lot of non-fiction books and summarise them on my website, To Summarise.
I am particularly keen to meet other EAs in the policy space.
Reach out to me if you want to have a chat about anything, really.
Upvoted with the benefit of hindsight bias.
In particular, I'm impressed by how these parts hit the nail on the head regarding recent events:
On average we are young and inexperienced. We have not yet experienced a scandal / major problem and have not yet started to think through how to avoid that happening again.
And:
Many EA organisations have very concentrated sources of funding. Donors therefore often fulfil the accountability and governance functions. Donors are analogous to customers, which can wield significant influence in for-profits.
I guess it depends on what we mean by "vetting funding". EA should definitely do more to understand and manage the nature and extent of the risks it is exposed to - i.e. general risk management. I don't think we need to wait for much more information to make such an assessment: the way this has unfolded, with so many grant recipients and others seemingly caught completely unprepared, is enough evidence that the EA community's risk management and communication were lacking.
But some people also seem to suggest that "vetting funding" means EA should have been trying to find fraud or other malfeasance in its donors. (That is suggested by the OP's post, and is what I meant by "vetting funding" in my previous comment.) I'm less sure about this claim. It's not clear how much due diligence was warranted in these cases versus how much due diligence EA actually did. So this is something that, as the OP suggests, would benefit from more information before coming to a conclusion.
Edit: Put another way, I think there are two questions:
1. Should EA improve its general risk management, given how exposed it was to a single funding source?
2. Could EA have detected fraud or malfeasance at FTX with more due diligence?
I've seen some discussions conflating the two. Even if the answer to the second is "no" or "unclear", EA's risk management practices could still be improved. That's all I'm saying.
I've commented on a separate post here.
In short: I'm not sure EA could have predicted or prevented this particular event of FTX blowing up.
However, EA did know that its funding sources were very undiversified and volatile, and could have thought more about the risks of an expected source of funding drying up and how to mitigate those risks.
Relatedly, some are arguing EA could have vetted FTX and Sam better and averted this situation. This reeks of hindsight bias! Probably EA could not have done better than all the investors who originally vetted FTX before giving them a buttload of money!
I've seen this mentioned quite a few times, most prominently by Eliezer Yudkowsky. I take the point that there were sophisticated investors, such as Sequoia and BlackRock, who researched the company and could not detect FTX's possible self-dealing with Alameda. I think it's fair to say that EA probably could not have detected that the FTX situation would blow up in exactly the way it did, even with more due diligence.
I also think it's rational to apply more due diligence to where you are investing your money than to where you are receiving it from - and my understanding is that EA (on the whole) was not actually investing in FTX.
However, what should not be lost sight of is that FTX funding made up a very significant share of EA's funding as a whole: in 2021, it was estimated that the FTX team's funding made up $16.5 billion of $46.1 billion (roughly 36%). (Moskovitz's funding was even larger, at roughly 49%.)
This is incredibly undiversified, especially given how volatile SBF's and Moskovitz's wealth is. I am sure this is far more concentrated than the portfolio of any large investor who actually put money into FTX. EA therefore stands to lose a lot more if funding from FTX (or Moskovitz) falls away. I don't want this point to get lost in the debate.
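For concreteness, here is a minimal back-of-envelope sketch of those shares, using the 2021 estimates above. Note the Moskovitz dollar figure is back-derived from the ~49% share mentioned, not a separately sourced number:

```python
# Back-of-envelope concentration check using the 2021 estimates cited above.
# Assumption: Moskovitz's dollar figure is inferred from the ~49% share,
# so treat it as an approximation rather than a sourced number.
total_funding = 46.1                       # estimated total EA funding, $B
ftx_funding = 16.5                         # estimated FTX team funding, $B
moskovitz_funding = 0.49 * total_funding   # implied ~$22.6B

ftx_share = ftx_funding / total_funding
moskovitz_share = moskovitz_funding / total_funding

print(f"FTX share:        {ftx_share:.0%}")                    # ~36%
print(f"Moskovitz share:  {moskovitz_share:.0%}")              # ~49%
print(f"Top two combined: {ftx_share + moskovitz_share:.0%}")  # ~85%
```

The combined figure is the striking one: on these estimates, roughly 85% of all EA funding traced back to just two donors.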
Now, I'm not sure that the answer was that EA should have vetted its funding more. When people are offering you "free" money,[1] I don't think there is too much obligation to vet it (above any legal obligations that might exist). I think the answer is probably that EA should have thought about its risk exposure more, given how undiversified and volatile its funding is. In particular:
For example, I did see a statement somewhere suggesting that EAs should personally diversify away from crypto, given how exposed EA is to it generally, but that did not seem to be a prominent, widely advertised piece of advice.
I know also that there is some general career advice for people to build up a decent financial runway for themselves. Perhaps there should have been greater emphasis within the community that people who rely on undiversified funding (grants, salaries) should factor in that risk and weigh it against their personal risk appetite.
I leave aside the question of whether SBF was using EA to "launder" his reputation, in which case the money was arguably not entirely "free". I don't have an informed view on that.
Thanks for sharing. I've never heard of this Anki deck idea before and am intrigued. I have used Anki before for language learning. But I really like your idea of more deliberately "consuming" pleasant memories - I've often thought it's a shame how little we savour most of our nice memories.
If you don't mind me asking:
I agree with you that the original comment, taken literally, is probably false and that EAs consorting with billionaires can still retain some power.
But I think the original comment by Berta had a good point: there seemed to be a general naivete within EA about power and other people's intentions. However, that is just from my vantage point as someone who does not work at an EA org and is not in any inner EA circle.
I think recent events have definitely lowered the general level of trust within the EA community. But that is not necessarily a good thing and I hope EA does not overcorrect, either. Getting the balance right will be tricky, but I think Berta was on the right track in that EA could benefit from thinking and talking about power more.
I've only just stumbled upon this question and I'm not sure if you'll see this, but I wrote up some of my thoughts on the problems with the Total View of population ethics (see the "Abortion and Contraception" heading specifically).
Personally, I think there is a tension there which does not seem to have been discussed much in the EA forum.
Re: "I think many of these orgs have also dysfunctional aspects (e.g., I think most orgs are struggling with sexual harassment and concentration of formal and informal power)"
I agree with that, but I also think there's something to be learned from dysfunctional orgs. Why are they dysfunctional? How did they become dysfunctional? Why have attempts to make them less dysfunctional failed?
There is just as much to be learned from failures as from successes, and possibly more.