(Sorry; I forgot to cross-post when I made this post)
Having realized that I have asked these same questions repeatedly across a wide range of channels without ever getting satisfying answers to them, I'm compiling them here so that they can be discussed by a wide range of people in an ongoing way.
- Why has EV made many moves in the direction of decentralizing EA, rather than in the direction of centralizing it? In my non-expert assessment, there are pros and cons to each option; what made EV conclude that the balance tipped in that particular direction?
- Why has Open Philanthropy decided not to invest in genetic engineering and reproductive technology, despite many notable figures (especially within the MIRI ecosystem) saying that this would be a good avenue to work in to improve the quality of AI safety research?
- Why, as an organization aiming to ensure the health of a community that is majority male and includes many people of color, does the CEA Community Health team consist of seven white women, no men, and no people of color?
- Has anyone considered possible perverse incentives facing the aforementioned CEA Community Health team, in that they may be motivated to exaggerate problems in the community to justify their own existence? If so, what makes CEA as a whole think that the team's continued existence is worth the cost?
- Why do very few EA organizations do large mainstream fundraising campaigns outside the EA community, when the vast majority of outside charities do?
- Why have so few people, both within EA and within popular discourse more broadly, drawn parallels between the "TESCREAL" conspiracy theory and antisemitic conspiracy theories?
- Why do university EA groups appear, at least upon initial examination, to focus so much on recruiting, to the exclusion of training students and connecting them with interested people?
- Why is there a pattern of EA organizations renaming themselves (e.g. Effective Altruism MIT renaming to Impact@MIT)? What were seen as the pros and cons, and why did these organizations decide that the pros outweighed the cons?
- When they did rename, why did they choose relatively "boring" names that potentially aren't as good for SEO as ones that more clearly reference Effective Altruism?
- Why aren't there more organizations within EA that are trying to be extremely hardcore and totalizing, on the level of religious orders, the Navy SEALs, the Manhattan Project, or even a really intense start-up? It seems like that is the kind of organization you would want to join, if you truly internalize the stakes here.
- When EAs talk about the "unilateralist's curse," why don't they qualify those claims with the fact that Arkhipov and Petrov were unilateralists who likely saved the world from nuclear war?
- Why hasn't AI safety as a field made an active effort to build large hubs outside the Bay, rather than the current state of affairs in which outside groups basically just function as recruiting channels to get people to move to the Bay?
I'm sorry if this is a bit disorganized, but I wanted to have them all in one place, as many of them seem related to each other.
Yeah, I heard about that. As far as I can tell, it failed for reasons specific to that particular implementation, not because of any flaw in the broader idea of running a project like this. In addition, Duncan has on multiple occasions expressed support for the idea of running a similar project that learns from the mistakes made here. So my question is, why haven't more organizations like that been started?