This just came out in Current Affairs Magazine. It's a polemic, pretty hacky, written from a bias in favour of socialism (framed as a better way of effecting change, at least for currently-alive humans). It has the usual out-of-context quotes from Ord, MacAskill, and Bostrom, and cites Phil Torres and Timnit Gebru. One for the files/conversation on how to deal with external criticism.
But it had a few more substantive points:
- the dismissal of AGI x-risk is unhelpful but not surprising, and there is probably little overlap between the alignment crowd and this magazine's readers (I got it through the FT's Alphaville blog, which is really good), so I doubt it's actively harmful. I think the efforts to push back and make the case are good (though that isn't a consensus; see this post and comments). FWIW, I tried to write up my reasons for disagreeing with another alignment-skeptical tech commentator.
- the Erik Hoel essay is worth a read, as it examines EA as a philosophy more rigorously while essentially agreeing with certain of its recommendations on how to behave (in Hoel's own life). See also this EAF post, though there aren't many comments there atm.
- much of the factually-verifiable or changeable criticism of EA/longtermism/AI/etc. revolves around the 'white male' critique. It would be great to have a set of statistics assessing this, if indeed EAs think it is actually an issue. For instance, I just did the AGI Safety Fundamentals course on both the technical and governance tracks, and thought my cohorts were pretty diverse (one leader was a non-white man, the other a white woman, and the non-white participant share was 50% in the technical cohort and 20% in the governance cohort). In the alignment world, female thought-leaders seem well-represented (Ajeya Cotra, Katja Grace, Vanessa Kosoy, and Beth Barnes, off the top of my head).
- related to the previous point, I think the 'white male' thing (presumably a hangover from Ord, Bostrom, MacAskill, Russell, Tegmark, and Christian having written all the highest-profile works so far) might ease with time and a little effort. For instance: going around to magnet (pre-undergrad) schools with high POC representation in (say) SF, NY, London, Paris, etc., pitching AI x-risk as something students might find more obviously interesting and less abstract/contentious than longtermism/EA (engineered pandemics are another possible topic). Obviously an earlier step is to develop a 'curriculum', or just an accessible talk that is politically acceptable in an educational environment, plus groundwork with Ofsted or its equivalents (the US is more difficult, as regulation is devolved to the state/local level, so there are probably fewer economies of scale).
- the ranking of climate change as a second-order problem is understandable (based on my reading of Ord, MacAskill, or this post), but it isn't a good look given the general public's concern (which is obviously amplified in countries with relatively low income or developmental status, or simply with more exposed geography). This 'bad look' might not matter much if EA isn't trying to grow, but it does seem to conflict with the priority (no. 3 in this list) of building EA as a movement: how do you get a broad, large, diverse group of people to care about EA while essentially telling some (substantial?) percentage of them (say in India, or parts of China or South America) that the floods/crop failures/etc. happening in their countries are relatively less important? Especially if some of those students/people come from less well-off families, and so aren't insulated from the resulting social and economic tensions. Either you will get a) adherents who hold certain moral views (which might of course be consistent with extreme utilitarianism), or b) you will skew movement growth towards places/people that are less exposed to climate change or wealthy enough to deal with it. Again, it might not matter very much and might be fully justified from a theoretical perspective, but it feels a bit weird in the court of public opinion (which unfortunately is where we live, and where policy actions are partially determined).
(Note that this comment is quick and not super well thought out. I hope to research and think about it more deeply at some point, and maybe write it up in better form.)
As with many articles critical of EA, this one spends a while arguing against the early EA focus on earning to give:
It's a little frustrating to me that EA orgs and public figures have basically conceded this argument and tend to shy away from actively defending earning to give as a standard EA path. I think the utilitarian argument the quoted graduate student was making is basically correct (with the caveats that one needs to account for one's career decision marginally affecting salaries in the field, and for whether one is likely to be a more effective worker than the person one is displacing). On the flip side, I don't think the deontological argument NJR is making really holds up under scrutiny. Current Affairs is a print magazine; printing and mailing thousands of copies of it every month contributes to resource usage and climate change. NJR is presumably okay with this because he thinks the benefits of educating and informing his readership exceed the harms of his resource usage. In the same way, I think working in a job that produces some harms can be okay if the net benefits of donating one's income substantially outweigh those harms.

This gets even more stark when you actually try to think through the human scale of it all. Imagine having to tell ten thousand parents that the reason their kids won't get anti-malaria pills this year is that your working as a stock trader violates the categorical imperative. It sounds absurd, but that's the kind of thing we're talking about here.
Something I think NJR and I would agree on is that it's really screwed up that the world is in this situation to begin with. There's something deeply unjust about a random American lawyer getting to decide whether people die from malaria based on their career and donation decisions. But we can't wave a magic wand and change that overnight, and choosing to focus only on efforts at systemic change means not getting lifesaving medicine to a ton of people who need it right now. I wish critics engaged more deeply with those really hard tradeoffs, and that EAs did a better job of articulating them. Trying to sidestep the conversation about earning to give undersells the moral challenge and the stakes we're dealing with.
One thing that's sad, and perhaps not obvious to people, is that (as I understand it) Nathan Robinson was initially sympathetic to EA, and this played a role in his at-times vocal advocacy for animals. I don't know that there's much to be done about this; I think the course of events was perhaps inevitable. But it's relevant context for other Forum readers who see this.
The discussion on Erik Hoel's piece is here:
https://forum.effectivealtruism.org/posts/PZ6pEaNkzAg62ze69/ea-criticism-contest-why-i-am-not-an-effective-altruist