When people think about the history of effective altruism as a memeplex, they generally assume that it developed into a coherent philosophy around the time of Giving What We Can’s founding in 2009. Of course, you could point to moral philosophers like Bentham, Singer, or Unger, whose ideas naturally implied EA but who never fully laid the conceptual groundwork for EA methodology. You could also point to altruists who were highly effective, but who didn’t develop a detailed EA worldview or methodology. To be sure, the idea of altruism was not new, nor was the idea of effectiveness, but the unique combination—emphasizing principles of cause neutrality, epistemic and instrumental rationality, quantitative analysis, and the importance of considering counterfactuals—had never been seen before. Or had it?
Enter the work of Gordon Irlam. Irlam has a varied résumé, which includes software engineering at Google in 2004, work as a grad student in a malaria research lab in 2005, and, most recently, self-study in artificial intelligence. He also runs a small charitable foundation that has donated over $1.7 million to charities, mostly ones working on developing-world health and poverty and on global catastrophic risks. But what I would like to highlight is his essay “Making a difference,” which does not list its creation date but was last edited in January 2004.
The similarities between this 2004 essay and modern EA philosophy are uncanny. The article begins by discussing the difficulty of attributing counterfactual impact, and then goes into a very detailed discussion of replaceability, similar to what would later be seen in William MacAskill’s 2014 paper “Replaceability, Career Choice, and Making a Difference”. Here is one quote:
We each seek to exercise our free will in such a way as to maximize our preferred utility function of the world. What makes this difficult is the interlinking of any action we might take with the action of others. For instance, if somebody accepts a job working as a youth counsellor, offsetting the good that might be done, is the loss of good the next best candidate would have contributed. Taking the job causes things to ripple down the line, as they in turn, displace somebody else from some other job, and so on.
While the idea of “earning to give” has earlier predecessors, Irlam provides the clearest pre-EA argument I have seen:
Suppose you have a skill that is highly valued by employers, but you lack skills highly valued with respect to your utility function. Then, one option that makes a lot of sense is to take a high paying job that is neutral with respect to your utility function, and to donate much of what you earn to an organization that works on what you care about. This allows you to translate the skill you don't value into being effectively highly skilled at what you care about. You will undoubtedly be able to achieve more by working in this fashion than working on the issues you care about directly.
The article concludes by discussing the pivotal role one person, Viktor Zhdanov, played in the eradication of smallpox. This is eerily similar to the way EAs often talk about Stanislav Petrov or Vasili Arkhipov. William MacAskill would later write an article praising Zhdanov in 2015.
Irlam did more than theorize about the economics of doing good. He also tried to put these principles into action by developing his “Back of the Envelope Guide to Philanthropy,” which compares various philanthropic causes according to their “leverage factor”—a measure of the cost-effectiveness of the interventions themselves, rather than an overhead ratio. According to the copyright notice, this project was started in 2005, but the earliest archive on the Wayback Machine is from 2008. For comparison, GiveWell was launched in 2007. In other words, it appears that Irlam independently discovered cause prioritization. At some point between 2011 and 2013, AI safety was added to the top of the list of causes.
I’m not saying Gordon Irlam is the earliest person to come up with these EA ideas, or even the earliest to write them down. For all I know, there’s an obscure economics paper or Usenet post from decades earlier that is even more uncannily similar to modern EA. Regardless of this possibility, I think Irlam deserves some recognition for his accomplishments.
H/T to Issa Rice for pointing me to Gordon Irlam, and to Matthew Barnett for proofreading and editing this post.
Completely agree! I'd also emphasise some really important early donations to Giving What We Can and GCRI. From https://www.gricf.org/annual-report.html
"Summarizing the funding provided by the foundation for 2000-2019:
RESULTS Educational Fund - $682,603 (39%)
Global Catastrophic Risk Institute (c/o Social & Environmental Entrepreneurs) - $326,043 (19%)
Keep Antibiotics Working (c/o Food Animal Concerns Trust) - $135,000 (8%)
Institute for One World Health - $123,100 (7%)
Future of Humanity Institute (c/o Americans for Oxford Inc) - $120,000 (7%)
Knowledge Ecology International - $100,000 (6%)
Health GAP - $66,000 (4%)
Machine Intelligence Research Institute - $55,000 (3%)
Giving What We Can (c/o Centre for Effective Altruism USA Inc) - $50,000 (3%)
Kids International Dental Services - $24,000 (1%)
Total - $1,735,558.04 (100%)"
I agree that Gordon deserves great praise and recognition!
One clarification: My discussion of Zhdanov was based on Gordon's work: he volunteered for GWWC in the early days, and cross-posted about Zhdanov on the 80k blog. In DGB, I failed to cite him, which was a major oversight on my part, and I feel really bad about that. (I've apologized to him about this.) So that discussion shouldn't be seen as independent convergence.
Thanks for this post. Besides due recognition, I think that studying people who professed EA ideas before the movement began may provide insights on, e.g., what prevented these ideas from spreading earlier, what shortcomings they faced, what actually worked, etc.
See also “Gordon Irlam on the BEGuide,” an interview in an EA/EA-adjacent blog from 2014.