To be clear, I'm not at all recommending changing one's beliefs here. My language of gut beliefs vs cognitive beliefs was probably too imprecise. I'm recommending that, for some people - particularly those who are able to act on beliefs they don't intuitively feel - it's better not to try to intuitively feel those beliefs.
For some people, this may come at a cost to their ability to form true beliefs, which is a difficult tradeoff. In my own case, I think that, all things considered, intuiting beliefs has made me worse at forming true beliefs.
My experience with Atlas fellows (although there was substantial selection bias involved here) is that they're extremely high calibre.
I also think there's quite a lot of friction in getting LTFF funding - the main issue being that it takes quite a long time to come through. I think there are quite large benefits to being able to unilaterally decide to do a project and having the funding immediately available for it.
Yeah, I'm pretty sceptical of the judgement of experienced community builders on questions like the effect of different strategies on community epistemics. If I frame this as an intervention - "changing community building in x way will improve EA community epistemics" - I have a strong prior that it has no effect, because most interventions people try have no or only small effects (see the famous graph of global health interventions).
I think the following are some examples of places where you'd expect people to have good intuitions about what works well, but they don't:
I agree that, ideally, one would get the gut stuff right both practically and epistemically. In my case, the tradeoff - a loss of productivity and of general reasoning ability in exchange for some epistemic gains - wasn't worth it.
I think it's plausible that people in a similar situation to me - people who are good at making decisions based on analytic reasoning alone, and who have reason to think they might be vulnerable if they tried to believe things on a gut level as well as an analytic one - should consider not engaging with certain EA topics on a gut level. I don't restrict this to AI safety: I know people who've had similar reactions when thinking about nuclear risk, and I've personally decided not to think about s-risk or animal welfare on a gut level either.
I do want to emphasise that there was a tradeoff here - I think I have somewhat better AI safety takes as a result of thinking about AI safety on a gut level. That benefit, though, was reasonably small and not worth the other costs from an impartial welfarist perspective.