Building effective altruism
Building EA
Growing, shaping, or otherwise improving effective altruism as a practical and intellectual project

Quick takes

34 · 17d · 2
I'm concerned about the new terms of service for Giving What We Can, which will go into effect after August 31, 2024. This is a significant departure from Effective Ventures' TOS (GWWC is spinning out of EV), which has users grant EV an unlimited but non-exclusive license to use feedback or suggestions they send, while retaining the right to do anything with it themselves. I've previously talked to GWWC staff about my ideas to help people give effectively, such as a donation decision worksheet I made. If this provision goes into effect, it would deter me from sharing my suggestions with GWWC in the future, because I would risk losing the right to disseminate or continue developing those ideas or materials myself.
9 · 4d
The original website for Students for High Impact Charities (SHIC) at https://shicschools.org is down (you can find it on the Wayback Machine), but the program scripts and slides they used in high schools are still available in their Google Drive folder at https://drive.google.com/drive/folders/0B_2KLuBlcCg4QWtrYW43UGcwajQ. This could potentially be a valuable EA community-building resource.
35 · 21d · 1
I had written up what I learned as a Manifund micrograntor a few months ago, but never got around to polishing it for publication. Still, I think those reactions could be useful for people in the EA Community Choice program now: you've got the same basic pattern of a bunch of inexperienced grantmakers with a few hundred bucks to spend each and ~40-50 projects to look at quickly. I'm going to post those notes without much editing, since the program is fairly short. A few points are specific to the types of proposals that were in the microgranting experiment (which came from the ACX program).

General Feedback for Grant Applicants [from ACX Microgrants Experience]

Caution: This feedback is based on a single micrograntor's experience. It may be much less applicable to other contexts -- e.g., those involving larger grantors, grantors who do not need to evaluate a large number of proposals in a limited amount of time, or grantors who are in a position to fund a significant percentage of the grants they review.

I had pre-committed to myself that I would look at every single proposal unless the title convinced me that it was way too technical for me to understand. This probably affected my experience, and was done more for educational / information-value reasons than anything else.

* If you have a longer proposal, please start with an executive summary, limited to ~300 words. You may get only 2-3 minutes on an initial screen, maybe even less.
* After getting a sense of the basic contours of a proposal, I found myself with a decent sense of where the weaker points probably were and wanted to see, in an efficient manner, whether these were clear dealbreakers. Please be sure to red-team your proposal and address the weak points!
* Use shorter paragraphs with titles or other clear, skimmable signals. As per the above, I need to be able to quickly find your discussion of specific points.
* One recurrent weakness involved an unclear theory of impact that I had to infer from the proposal…
25 · 17d · 9
An idea that's been percolating in my head recently, probably thanks to the EA Community Choice, is more experiments in democratic altruism. One of the stronger leftist critiques of charity revolves around the massive concentration of power in a handful of donors. In particular, we leave it up to donors to determine whether they're actually doing good with their money, but people are horribly bad at self-perception, and very few people would be good at admitting that their past donations were harmful (or merely morally suboptimal).

It seems clear to me that Dustin & Cari are particularly worried about this, and Open Philanthropy was designed as an institution to protect them from themselves. However, (1) Dustin & Cari still have a lot of control over which cause areas to pick, and sort of informally defer to community consensus on this (please correct me if I have the wrong read on that), and (2) although it was intended to, I doubt it can scale beyond Dustin & Cari in practice. If Open Phil were funding harmful projects, it would be relying only on the diversity of its internal opinions to defuse that; and those opinions are subject to a self-selection effect in applying to OP, and also an unwillingness to criticise your employer.

If some form of EA were to be practiced on a national scale, I wonder if it could take the form of an institution which selects cause areas democratically and has a department of accountable fund managers to determine the most effective way to achieve them. I think this differs from the Community Choice and other charity elections because it doesn't require donors to think through implementation (except through accountability measures on the fund managers, which would come up much more rarely), and I think members of the public (and many EAs!) are much more confident in their desired outcomes than their desired implementations; in this way, it reflects how political elections take place in practice. In the near term, EA could bootstrap such a fund…
69 · 2mo · 4
David Rubenstein recently interviewed Philippe Laffont, the founder of Coatue (probably worth $5-10b). When asked about his philanthropic activities, Laffont basically said he'd been too busy to think about it but wanted to do something someday. I admit I was shocked. Laffont is a savant technology investor and entrepreneur (including in AI companies), and it sounded like he literally hadn't put much thought into what to do with his fortune.

Are there concerted efforts in the EA community to get these people on board? Like, is there a Google Doc with a six-degrees-of-separation plan to get dinner with Laffont? The guy went to MIT and invests in AI companies. It just wouldn't be hard to get in touch. It seems like increasing the probability that he aims some of his fortune at effective charities would justify a significant effort here. And I imagine there are dozens or hundreds of people like this.

Am I missing some obvious reason this isn't worth pursuing or is likely to fail? Have people tried? I'm a bit of an outsider here, so I'd love to hear people's thoughts on what I'm sure seems like a pretty naive take!

https://youtu.be/_nuSOMooReY?si=6582NoLPtSYRwdMe
62 · 3mo · 12
I quit. I'm going to stop calling myself an EA, and I'm going to stop organizing EA Ghent, which, since I'm the only organizer, means that in practice it will stop existing.

It's not just because of Manifest; that was merely the straw that broke the camel's back. In hindsight, I should have stopped after the Bostrom or FTX scandal. And it's not just because they're scandals; it's because they highlight a much broader issue within the EA community regarding whom it chooses to support with money and attention, and whom it excludes.

I'm not going to go to any EA conferences, at least not for a while, and I'm not going to give any money to the EA fund. I will continue working for my AI safety, animal rights, and effective giving orgs, but will no longer be doing so under an EA label. Consider this a data point on what choices repel which kinds of people, and whether that's worth it.

EDIT: This is not a solemn vow forswearing EA forever. If things change, I would be more than happy to join again.

EDIT 2: For those wondering what this quick take is reacting to, here's a good summary by David Thorstad.
110 · 1y · 11
GET AMBITIOUS SLOWLY

Most approaches to increasing agency and ambition focus on telling people to dream big and not be intimidated by large projects. I'm sure that works for some people, but it feels really flat for me, and I consider myself one of the lucky ones. The worst-case scenario is that big inspiring speeches get you really pumped up to Solve Big Problems but you lack the tools to meaningfully follow up.

Faced with big dreams but unclear ability to enact them, people have a few options:

* try anyway and fail badly, probably too badly for it to even be an educational failure
* fake it, probably without knowing they're doing so
* learned helplessness, possible systemic depression
* be heading towards failure, but too many people are counting on you, so someone steps in and rescues you. They consider this net negative and prefer the world where you'd never started to the one where they had to rescue you.
* discover more skills than they knew; feel great, accomplish great things, learn a lot

The first three are all very costly, especially if you repeat the cycle a few times.

My preferred version is the ambition snowball, or "get ambitious slowly". Pick something big enough to feel challenging but not much more, accomplish it, and then use the skills and confidence you learn to tackle a marginally bigger challenge. This takes longer than immediately going for the brass ring and succeeding on the first try, but I claim it is ultimately faster and has higher EV than repeated failures. I claim EA's emphasis on doing The Most Important Thing pushed people into premature ambition and everyone is poorer for it. Certainly I would have been better off hearing this 10 years ago.

What size of challenge is the right size? I've thought about this a lot and don't have a great answer. You can see how things feel in your gut, or compare to past projects. My few rules:

* stick to problems where failure will at least be informative. If you can't track reality well eno…
96 · 1y · 19
My overall impression is that the CEA community health team (CHT from now on) are well intentioned but sometimes understaffed and other times downright incompetent. It's hard for me to be impartial here, and I understand that their failures are more salient to me than their successes. Yet I endorse the need for change, at the very least including 1) removing people from the CHT who serve as advisors to any EA funds or hold other conflict-of-interest positions, 2) hiring HR and mental health specialists with credentials, and 3) publicly clarifying their role and mandate.

My impression is that the most valuable function the CHT provides is supporting community building teams across the world, from advising community builders to preventing problematic community builders from receiving support. If this is the case, I think it would be best to rebrand the CHT as a CEA HR department, and for CEA to properly hire the community builders who are now supported as grantees, which one could argue is an employee misclassification.

I would not be comfortable discussing these issues openly out of concern for the people affected, but here are some horror stories:

1. A CHT staff member pressured a community builder to put up with and include a community member with whom they weren't comfortable interacting.
2. A CHT staff member pressured a community builder not to press charges against a community member by whom they felt harassed.
3. After the police put a restraining order in place in this last case, the CHT refused to liaise with the EA Global team to deny access to the restrained person, even knowing that the affected community builder would be attending the event.
4. My overall sense is that the CHT is not very mindful of the needs of community builders in other contexts. Two very promising professionals I've mentored have dissociated from EA, and rejected a grant, in large part because of how they were treated by the CHT.
5. My impression is that the CHT staff underm…