I appreciate this post, but pretty strongly disagree. The EA I've experienced seems to be at most a loose but mutually supportive coalition motivated by trying to most effectively do good in the world. It seems pretty far from being a monolith or from having unaccountable leaders setting some agenda.
While there are certainly things I don't love, such as treating EAGs as mostly opportunities to hang out, or things like MacAskill's seemingly very expensive and opaque book press tour, your recommendations seem like they would mostly hinder efforts to address the causes the community has identified as particularly important to work on.
For instance, they'd dramatically increase the transaction costs for advocacy efforts (e.g. most college groups) aimed at introducing people to these issues and giving them an opportunity to consider working on solving them. One of the benefits of EA groups is that they allow a critical mass of people to become involved where there might not be enough interest to sustain clubs for individual causes (and they avoid the costs of people needing to organize multiple groups). In effect, this would mostly just cede ground and attention to the likes of consulting, finance, and tech firms.
Similarly, we shouldn't discount the (imo enormous) value of having people (often very senior people) willing to offer substantial help/advice on projects they aren't involved with simply because the other person/group is part of the same community and legibly motivated for similar reasons. I can also see ways in which a loss of community would lead to reduced cooperation between orgs and competition over resources. It seems important to note too that being part of a cause-neutral community makes people more able to change priorities when new evidence/arguments emerge (as the EA community has done several times since I've been involved).
I think proposals of this kind really ought to be grounded in showing how the arguments the community has endorsed for some particular strategy are flawed, e.g. showing that community building is not in fact impactful. We generally seem to be over-updating on a single failure (even allowing that the failure was particularly harmful).
Note: wrote this fairly quickly, so it's probably not the most organized collection of thoughts.
Writing since I haven't seen this mentioned elsewhere, but it seems like it might be a good idea to conduct (and announce that you are conducting) a rapid evaluation of grantee organizations that received a majority of their funding from FF, so that emergency funding can be provided to the most promising ones and the loss of institutions avoided. If this is something OP plans to do, it should do so quickly and unambiguously.
I'm imagining something like the following: a potentially important org has lost its funding, and its employees will soon begin looking for and accepting other opportunities. If they do leave, it could be very difficult to get them back or find suitable replacements. If whole organizations cease operations, it could set back work in their areas substantially: momentum will be lost, future organizations will have to answer for why this similar org didn't work out, the ability to make credible commitments in the org's field will be at risk if it suddenly drops projects, and institutional knowledge will disappear. The idea is similar to how other countries supplemented employee salaries during the pandemic, rather than taking the US's unemployment insurance approach.
Also, for disclosure: I haven't received any FF funding, nor do I work at an org that did.
I ran the UChicago x-risk fellowship this summer (we'd already started by the time I learned there was a joint ERI survey, so we decided to stick with our original survey form).
I just wanted to note that, among the fellows who weren't previously aware of x-risk, we observed a dramatic increase in how important they thought x-risk work was and in their reported familiarity with it. Many also indicated in their written responses an intention to work on x-risk-related topics in the future, where they hadn't when answering the same question previously. We advertised exclusively to UChicago students for this iteration, and about 2/3 of our fellows were new to EA/x-risk.
A few questions mostly not relevant to me:
i) If I imagine I'm still leading a student group, a few things come to mind:
ii) For the Century Fellowship:
Funding private versions of Longtermist Political Institutions to lay groundwork for government versions
Some of the seemingly most promising and tractable ways to reduce short-termist incentives for legislators are Posterity Impact Assessments (PIAs) and Futures Assemblies (see Tyler John's work). But it isn't clear just how PIAs would actually work, e.g. what would qualify as an appropriate triggering mechanism, what evaluative approaches would be used to judge policies, or how far into the future policies can be evaluated. It seems like it would be relatively inexpensive to fund an organization to conduct PIAs and thereby build a framework that a potential in-government research institute could adopt instead of having to start from scratch. The precedent set by such an organization would also make it easier to advocate for longtermist agencies/research institutes within government.
Similarly, it would be reasonably affordable to run a trial Futures Assembly, wherein a representative sample of a country's population is convened to deliberate over how and to what extent policymakers should consider the interests of future persons/generations. This would provide both a precedent for potential government-funded versions and a democratically legitimate advocate for longtermist policy decisions.
Basically, EAs could lay the groundwork for some of the most promising/feasible longtermist political institutions without first needing to get legislation passed.
I thought Open Phil's Criminal Justice Reform efforts would include work in this area, and it seems they've done some research into it. Here's what a quick Google search turned up, for anyone interested:
https://www.openphilanthropy.org/research/cause-reports/cannabis-policy
Hey, Zack from XLab here. I'd be happy to provide a couple of sentences of feedback on your application if you send me an email.
The most common reasons for rejection before an interview were things like: no indication of US citizenship or a student visa, ChatGPT-seeming responses, exercise responses that didn't clearly and compellingly show their relevance to global catastrophic risk mitigation, or a lack of clarity about how mission-aligned the applicant was.
We appreciate the feedback, though.