Thank you for writing this - a lot of what you say here resonates strongly with me, and captures well my experience of going from being very involved in EA back in 2012-14 or so to actively distancing myself from the community over the last few years. I've tried to write about my perspective on this multiple times (I have so many half-written Google docs) but never felt quite able to get to the point where I had the energy/clarity to post something and actually engage with EA responses to it. I appreciate this post and expect to point people to it sometimes when trying to explain why I'm not that involved in or positive about EA anymore.
I also interpreted this comment as quite dismissive but I think most of that comes from the fact Max explicitly said he downvoted the post, rather than from the rest of the comment (which seems fine and reasonable).
I think I naturally interpret a downvote as meaning "I think this post/comment isn't helpful and I generally want to discourage posts/comments like it." That seems pretty harsh in this case, and at odds with the fact Max seems to think the post actually points at some important things worth taking seriously. I also naturally feel a bit concerned about the CEO of CEA seeming to discourage posts which suggest EA should be doing things differently, especially where they are reasonable and constructive like this one.
This is a minor point in some ways, but I think explicitly stating "I downvoted this post" can say quite a lot (especially when coming from someone with a senior position in the community). I haven't spent a lot of time on this forum recently, so I'm wondering if other people think the norms around up/downvoting differ from my interpretation - and in particular whether you, Max, meant to use it differently?
[EDIT: I checked the norms on up/downvoting, which say to downvote if either "There’s an error", or "The comment or post didn’t add to the conversation, and maybe actually distracted." I personally think this post added something useful to the conversation about the scope and focus of EA, and it seems harsh to downvote it because it conflated a few different dimensions - and that's why Max's comment seemed a bit harsh/dismissive to me]
Firstly, I very much appreciate the grant made by the LTF Fund! On the discussion of the paper by Stephen Cave & Seán Ó hÉigeartaigh in the addenda, I just wanted to briefly say that I’d be happy to talk further about both: (a) the specific ideas/approaches in the paper mentioned, and also (b) broader questions about CFI and CSER’s work. While there are probably some fundamental differences in approach here, I also think a lot may come down to misunderstanding/lack of communication. I recognise that both CFI and CSER could probably do more to explain their goals and priorities to the EA community, and I think several others beyond myself would also be happy to engage in discussion.
I don’t think this is the right place to get into that discussion (since this is a writeup of many grants beyond my own), but I do think it could be productive to discuss elsewhere. I may well end up posting something separate on the question of how useful it is to try and “bridge” near-term and long-term AI policy issues, responding to some of Oli’s critique - I think engaging with more sceptical perspectives on this could help clarify my thinking. Anyone who would like to talk/ask questions about the goals and priorities of CFI/CSER more broadly is welcome to reach out to me about that. I think those conversations may be better had offline, but if there's enough interest maybe we could do an AMA or something.
I'd be keen to hear a bit more about the general process used for reviewing these grants. What did the overall process look like? Were applicants interviewed? Were references collected? Were the same general criteria used for all applications? Reasoning behind specific decisions is great, but it also risks giving the impression that the grants were made just based on the opinions of one person, and that different applications might have gone through somewhat different processes.
Thanks for your detailed response, Ollie. I appreciate there are tradeoffs here, but based on what you've said I do think that more time needs to go into these grant reviews.
I don't think it's unreasonable to suggest that distributing nearly $1,000,000 in grant funding should require two people working full-time for a month, especially if the aim is to find the most effective ways of doing good/influencing the long-term future (though I recognise that this decision isn't your responsibility personally!). Maybe it is very difficult for CEA to find people with the relevant expertise who can do that job. But if that's the case, then I think there's a bigger problem (the job isn't being paid well enough, or being valued highly enough by the community), and maybe we should question the case for EA Funds distributing so much money.
> The plan seemed good, but I had no way of assessing the applicant without investing significant amounts of time that I did not have available (which is likely why you see a skew towards people the granting team had some past interactions with in the grants above)
I'm pretty concerned about this. I appreciate that there will always be reasonable limits to how long someone can spend vetting grant applications, but I think EA funds should not be hiring fund managers who don't have sufficient time to vet applications from people they don't already know - being able to do this should be a requirement of the job, IMO. Seconding Peter's question below, I'd be keen to hear if there are any plans to make progress on this.
If you really don't have time to vet applicants, then maybe grant decisions should be made blind, purely on the basis of the quality of the proposal. Another option would be to have a more structured/systematic approach to vetting applicants themselves, which could be anonymous-ish: based on past achievements and some answers to questions that seem relevant and important.
This may be a bit late, but: I'd like to see a bit more explanation/justification of why the particular grants were chosen, and how you decided how much to fund - especially when some of the amounts are pretty big, and there's a lot of variation among the grants. e.g. £60,000 to revamp LessWrong sounds like a really large amount to me, and I'm struggling to imagine what that's being spent on.
Did SlateStarCodex even exist before 2009? I'm sceptical - the post archives only go back to 2013: http://slatestarcodex.com/archives/. Maybe not a big deal, but it does suggest at least some of your sample were choosing options randomly/dishonestly.
> If I could wave a magic wand it would be for everyone to gain the knowledge that learning and implementing new analytical techniques costs spoons, and when a person is bleeding spoons in front of you, you need a different strategy.
I strongly agree with this, and I hadn't heard anyone articulate it quite this explicitly - thank you. I also like the idea of there being more focus on helping EAs with mental health problems or life struggles where the advice isn't always "use this CFAR technique."
(I think CFAR are great and a lot of their techniques are really useful. But I've also spent a bunch of time feeling bad about the fact that I don't seem able to learn and implement these techniques the way many other people seem to, and it's taken me a long time to realise that trying to 'figure out' how to fix my problems in a very analytical way is very often not what I need.)
Thanks Peter - I continue to feel unsure whether it's worth the effort for me to do this, and am probably holding myself to an unnecessarily high standard, but it's hard to get past that. At the same time, I haven't been able to totally give up on the idea of writing something either - I do have a recent draft I've been working on that I'd be happy to share with you.
I thought about the criticism contest, but I think trying to enter it would create the wrong incentives for me. It makes me feel like I need to write a super well-reasoned and evidenced critique, which feels like too high a bar; if I'm going to write anything, something I can frame more as my own subjective experience feels better. Also, if I entered and didn't win a prize I might feel more bitter about EA, which I'd rather avoid - I think if I'm going to write something, it needs to be with very low expectations about how EAs will respond to it.