I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .
As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.
To support my case, I made a spreadsheet of all the major issues EV had run into that I was aware of and whether having non-EA experts helped.
Were these mostly situations in which EV had run into a major issue and then an outside expert was brought in? To the extent that the underlying developments that led to an issue came about from an EA / EV-insider way of thinking, I would expect significant performance costs associated with changing horses in midstream. So I wouldn't update much on the advisability of bringing in outside experts before a problem happens, or after a problem happens if the outside experts had played a role in setting up the underlying developments.
As a rough analogy, one can imagine a gridiron football offense that has been built (in terms of training, personnel, etc.) to align with a particular offensive strategy (e.g., the West Coast offense). If your team is set up that way, subbing in a key player whose skill set doesn't align to the previously chosen offensive strategy isn't usually going to work well in the short to medium run. This doesn't imply that the new player is bad -- just that your team has pre-committed to playing a particular offense. Ex ante, the new guy could have been the right player for your team contingent on your team having built a flexible enough system for him to work effectively in.
The Forum team can confirm / disconfirm, but the rationale suggests that having a fiscal sponsor who meets the US 501(c)(3) or UK registered charity requirement would be sufficient.
For a US 501(c)(3), there's a lot of administrative overhead involved in giving to a non-501(c)(3). I assume the same is broadly true in the UK. So the US/UK requirement makes sense if there's no 501(c)(3)/registered charity on the other end.
It's reasonable to conclude that if there are enough viable major sources of seed funding in an area, if those sources make decisions independently enough of each other, and if an organization has struck out with ~all of them, that is probably a signal that the org should not move forward with a public appeal. I don't think we significantly disagree on what a public appeal would ideally look like, either.
I think points that may or may not be cruxes include:
Or, less flippantly, this seems to me what EA Funds and the other granting groups that give seed funding do.
Conditional on "Alice is reading Org's public appeal for funding and deciding whether to give," it seems that Alice has previously (and at least implicitly) decided not to defer to EA Funds or a similar organization for some reason or another.[1]
Of course, Alice could decide to ignore a randomly-selected community jury too! But:
A funding circle with a shared initial screening process is probably the closest extant analogue to what I was gesturing at.
Or a funding circle may not have a mechanism for someone to donate three, four, or maybe even low five figures.
So I like the for-profit approach as a model. Early in the life of your project you have a small number of high-context funders where you can put time into each funding relationship. As you scale, you "go public" and start also raising money from people you're not going to have conversations with.
I think that model works well in some circumstances, and I certainly appreciate the logic behind extending it to the non-profit world when that is the case. However, it's not the case that every potential founder or org has access to a "small number of high-context funders" who are in a position to support the early stages of the project without a public appeal. That means some of them are going to need to go public in a less developed state than would perhaps be ideal. Ability to self-fund, get support from one's family, or access a good pre-existing network for fundraising does not strike me as strongly correlated with the merit of the founder, the org's theory, or the org itself. So I do have some concerns that expecting too much out of early-stage founders or ideas will give those (at most) weakly merit-based factors too much weight in determining which founders, ideas, and orgs survive the infant-mortality period.
In general, I'd err on the side of encouraging public appeals rather than erring on the side of setting too high a bar. I think the average community member is pretty savvy, and the community's demonstrated deliberative skill in evaluating funding issues seems pretty strong. To the extent the community effort were too burdensome, I'd prefer something like people deferring somewhat to a ~randomly selected community screening jury (which could hopefully be at least medium-context) if the alternative were to discourage public appeals.
It changed generating a comment from something that would probably have taken 1.5 hours of work to something that took about 15 minutes and produced what I wanted to say.
Although I can't directly compare the ChatGPT version to a hypothetical directly-written version of the comment, my hunch is that the former is about twice as long as the latter would have been. It's pretty common for AI to need many more words than a reasonably skilled human author to express the same idea. So in a sense, I think generative AI use often shifts the time burden of the author-reader joint enterprise from the author to the readers. This may or may not be a good tradeoff on the whole, but it is worth considering both sides.
My general take is that content authored with that level of AI assistance should be flagged as such, so the reader can make their own decision about whether to engage with it.
I don't think most development economists would endorse the idea that a viable pathway exists for LDCs to escape the poverty trap based on ~$600-800MM/year in EA funding (even assuming you could concentrate all GH&D funding on a single project) and near-zero relevant political influence, either. And those are the resources that GH&D EA has on the table right now in my estimation.
To fund something at even the early stages, one needs either the ability to execute any resulting project or the ability to persuade those who do. The types of projects you're implying are very likely to require boatloads of cash, widespread and painful-to-some changes in the LDCs, or both. Even conditional on a consensus within development economics, I am skeptical that EA has that much ability to get Western foreign aid departments and LDC politicians to do what the development economists say they should be doing.
The academic fields most relevant to GH&D work are fairly mature. Because of that, it's reasonable for GH&D to focus less on producing stuff that is more like basic research / theory generation (academia is often strong in this and has a big head start) and devote its resources more toward setting up a tractable implementation of something (which is often not academia's comparative advantage for various reasons).
GH&D also has a clearly successful baseline with near-infinite room for more funding, and so more speculative projects need to clear that baseline before they become viable. You haven't identified any specific proposed area to study, but my suspicion is that most of them would require sustained political commitment over many years in the LDC and/or large cash infusions beyond the bankroll of EA GH&D to potentially work.
FTX as a funding source also had plenty of non-fraudulent failure modes. Having "banked on receiving millions from FTX over the coming years" to the extent that not receiving those funds created a crisis seems like a serious misjudgment. That being said, it isn't clear to me the extent to which FTX's donation amounts would have tied into short-term fluctuations in crypto values.
The extent to which donations could be reallocated is unclear to me; it is possible for a donor to restrict donations to a specific purpose in a legally binding way. At least in some jurisdictions, those restrictions can often be binding even against the charity's creditors if the charity manages its finances correctly.
I read Zach to mean that projects need to have enough funding on hand to shut down in an orderly enough way -- which includes a way that does not create problems for sister projects -- in a near-worst case scenario. This could be a problem, for instance, if a project had financial commitments that bound EV but could not be satisfied out of resources allocated to the project.
There are, however, limits on what good financial controls can do for you if there's a massive funding shortfall and/or a massive unplanned liability. If (e.g.) a 50% revenue loss (not of a short-term nature) wouldn't seriously disrupt a charity's work, then that charity is probably too conservative on its spending or is raising excessive amounts of money that should go elsewhere.