This straightforwardly got the novel coronavirus (now "covid-19") on the radar of many EAs who were otherwise only vaguely aware of it, or thought it was another media panic, like bird flu.
The post also illustrates some of the key strengths and interests of effective altruism, like quantification, forecasting, and the ability to separate minor global events from bigger ones.
Eight years later, I still think this post is basically correct. My argument is more plausible the more one expects a lot of parts of society to play a role in shaping how the future unfolds. If one believes that a small group of people (who can be identified in advance and who aren't already extremely well known) will have dramatically more influence over the future than most other parts of the world, then we might expect somewhat larger differences in cost-effectiveness.
One thing people sometimes forget about my point is that I'm not making any claims ab...
This was the single most valuable piece on the Forum to me personally. It provides the only end-to-end model of risks from nuclear winter that I've seen and gave me an understanding of key mechanisms of risks from nuclear weapons. I endorse it as the best starting point I know of for thinking seriously about such mechanisms. I wrote what impressed me most here and my main criticism of the original model here (taken into account in the current version).
This piece is part of a series. I found most articles in the series highly informative, but this particula...
I think this is one of the best pieces of EA creative writing of all time.
Since writing this post, I have benefited both from 4 years of hindsight and from significantly more grantmaking experience, with just over a year at the Long-Term Future Fund. My main updates:
Stuff I'd change if I were rewriting this now:
This post is pushing against a kind of extremism, but it might push in the wrong direction for some people who aren't devoting many resources to altruism. It's not that I think people in general should be...
Excellent and underrated post. I actually told Greg a few years ago that this has become part of my cognitive toolkit and that I use this often (I think there are similarities to the Tinbergen Rule - a basic principle of effective policy, which states that to achieve n independent policy targets you need at least n independent policy instruments).
This tool actually caused me to deprioritize crowdfunding with Let's Fund, which I realized was doing a multiobjective optimization problem (moving money to effective causes and doing re...
The most common critique of effective altruism that I encounter is the following: it’s not fair to choose. Many people see a fundamental unfairness in prioritizing the needs of some over the needs of others. Such critics ask: who are we to decide whose need is most urgent? I hear this critique from some on the left who prefer mutual aid or a giving-when-asked approach; from some who prefer to give locally; and from some who are simply uneasy about the idea of choosing.
To this, I inevitably reply that we are always choosing. When we give money only to...
I would like to suggest that Logarithmic Scales of Pleasure and Pain (“Log Scales” from here on out) presents a novel, meaningful, and non-trivial contribution to the field of Effective Altruism. It is novel because even though the terribleness of extreme suffering has been discussed multiple times before, such discussions have not presented a method or conceptual scheme with which to compare extreme suffering relative to less extreme varieties. It is meaningful because it articulates the essence of an intuition of an aspect of life that deeply matters to ...
There are many reasons why I think this post is good:
Key points
I think this post contributes something novel, nontrivial, and important, in how EA should relate to economic growth, "Progress Studies," and the like. Especially interesting/cool is how this post entirely predates the field of progress studies.
I think this post has stood the test of time well.
For a long time, I've believed in the importance of not being alarmist. My immediate reaction to almost anybody who warns me of impending doom is: "I doubt it". And sometimes, "Do you want to bet?"
So, writing this post was a very difficult thing for me to do. On an object level, I realized that the evidence coming out of Wuhan looked very concerning. The more I looked into it, the more I thought, "This really seems like something someone should be ringing the alarm bells about." But for a while, very few people were predicting anything big on respectable f...
Longtermism and animal advocacy are often presented as mutually exclusive focus areas. This is strange, as they are defined along different dimensions: longtermism is defined by the temporal scope of effects, while animal advocacy is defined by whose interests we focus on. Of course, one could argue that animal interests are negligible once we consider the very long-term future, but my main issue is that this argument is rarely made explicit.
This post does a great job of emphasizing ways in which animal advocacy should inform our efforts to improve the ver...
As I nominate this, Holden Karnofsky recently wrote about "Minimal Trust Investigations" (124 upvotes), similar to Epistemic Spot Checks. This post is an example of such a minimal trust investigation.
The reason why I am nominating this post is that
That said, as other commenters point out, the post could perhaps use a re-write. Perhaps this decade review would be a good t...
As an employer, I still think about this post three years after it was published, and I regularly hear it referenced in conversations about hiring in EA. The experiences in it clearly resonated with a lot of people, as evidenced by the number of comments and upvotes. I think it's meaningfully influenced the frame of many hiring rounds at EA organizations over the past three years.
Cool Earth was the EA community's default response to anyone who wanted to donate to climate change for years, without particularly good reason. Sanjay's work overturned that recommendation and shortly after more rigorous recommendations were published.
Disclaimer: this is an edited version of a much harsher review I wrote at first. I have no connection to the authors of the study or to their fields of expertise, but am someone who enjoyed the paper here critiqued and in fact think it very nice and very conservative in terms of its numbers (the current post claims the opposite). I disagree with this post and think it is wrong in an obvious and fundamental way, and therefore should not be in decade review in the interest of not posting wrong science. At the same time it is well-written and exhibits a good ...
Summary: I think the post mostly holds up. The post provided a number of significant, actionable findings, which have since been replicated in the most recent EA Survey and in OpenPhil’s report. We’ve also been able to extend the findings in a variety of ways since then. There was also one part of the post that I don’t think holds up, which I’ll discuss in more detail.
The post highlighted (among other things):
This was one of many posts I read as I was first getting into meta EA that was pretty influential on how I think about things. It was useful in a few different ways:
1. Contextualising a lot of the other posts that were published around the same time, written in response to the "It's hard to get an EA job" post.
2. Providing a concrete model of action with lots of concrete examples of how to implement a hierarchical structure
3. I've seen the basic argument for more management made many times over the last few years in various specific contexts. W...
I come back to this post quite frequently when considering whether to prioritize MCE (via animal advocacy) or AI safety. It seems that these two cause areas often attract quite different people with quite different objectives, so this post is unique in its attempt to compare the two based on the same long-term considerations.
I especially like the discussion of bias. Although some might find the whole discussion a bit ad hominem, I think people in EA should take seriously the worry that certain features common in the EA community (e.g., an attraction towards abstract puzzles) might bias us towards particular cause areas.
I recommend this post for anyone interested in thinking more broadly about longtermism.
This post significantly adds to the conversation in Effective Altruism about how pain is distributed. As explained in the review of Log Scales, understanding that intense pain follows a long-tail distribution significantly changes the effectiveness landscape for possible altruistic interventions. In particular, this analysis shows that finding the top 5% of people who suffer the most from a given medical condition and treating them as the priority will allow us to target a very large fraction of the total pain such a condition generates. In the case of clus...
This was useful pushback on the details of a claim that is technically true, and was frequently cited at one point, but that isn't as representative of reality as it sounds.
EAs talk a lot about value alignment and try to identify people who are aligned with them. I do, too. But this is also funny at a global level, given that we don't understand our values, nor are we very sure how to understand them much better, reliably. Zoe's post highlights that it's too early to double down on our current best guesses and that more diversification is needed to cover more of the vast search space.
(My thanks to the post authors, velutvulpes and juliakarbing, for transcribing and adding a talk to the EA Forum, comments below refer to the contents of the talk).
I gave this a decade review downvote and wanted to set out why.
Reinventing the wheel
I think this is on the whole a decent talk that sets out an individual's personal journey through EA and working out how they can do the most good.
However I think the talk involves some amount of "reinventing the wheel" (ignoring and attempting to duplicate existing research).
In the ...
[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]
Might be one of the best intros to EA?
I think the post made some important but underappreciated arguments at the time, especially for high stakes countries with more cultural differences, such as China, Russia, and Arabic speaking countries. I might have been too negative about expanding into smaller countries that are culturally closer. I think it had some influence too, since people still often ask me about it.
One aspect I wish I'd emphasised more is that it's very important to expand to new languages – my main point was that the way we should do it is by building a capable, native-language ...
I see Gwern's/Aaron's post about The Narrowing Circle as part of an important thread in EA devoted to understanding the causes of moral change. By probing the limits of the "expanding circle" idea for counterexamples, perhaps we can understand it better.
Effective altruism is popular among moral philosophers, and EAs are often seeking to expand people's "moral circle of concern" towards currently neglected classes of beings, like nonhuman animals or potential future generations of mankind. This is a laudable goal (and one which I share), but it'...
[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important, but which I don't have time to re-read or say very nuanced things about.]
I think this post makes good points on a really important topic.
I also think it's part of what is maybe the best public debate on an important topic in EA (with Will's piece). I would like to see more public debate on important topics, so I also feel good about including this in the review to signal-boost this sort of public debate.
Overall, I think this is worth including in the review.
This is the most foundational, influential, and classic article in effective altruism. It's fitting that the Review coincides with this article's 50th anniversary. This article defends and discusses the proposition
If it is in our power to prevent something very bad from happening, without thereby sacrificing anything else morally significant, we ought, morally, to do it.
This proposition sounds quite reasonable and simultaneously has broad implications. Singer continues:
...The uncontroversial appearance of the principle just stated is deceptive. If it we
When I wrote this post back in 2014, the effective altruism movement was very different than it is today, and I think this post was taken more seriously than I wanted it to be. Generally speaking, at the time the EA movement was not yet taken seriously, and I think it needed to appear a bit more "normal", appeal more to mainstream sensibilities, and get credibility / traction. But now, in 2022, I think the EA movement unequivocally has credibility / traction and the times now call for "keeping EA weird" - the movement has enough sustaining power no...
I'm surprised to see that this post hasn't yet been reviewed. In my opinion, it embodies many of the attributes I like to see in EA reports, including reasoning transparency, intellectual rigor, good scholarship, and focus on an important and neglected topic.
IIRC a lot of people liked this post at the time, but I don't think the critiques stood up well. Looking back 7 years later, I think the critique that Jacob Steinhardt wrote in response (which is not on the EA forum for some reason?) did a much better job of identifying more real and persistent problems:
...
- Over-focus on “tried and true” and “default” options, which may both reduce actual impact and decrease exploration of new potentially high-value opportunities.
- Over-confident claims coupled with insufficient background research.
- Over-reliance on a small set o
This post is a really good example of an EA organisation being open and clear to the community about what it will and will not do.
I still have disagreements about the direction taken (see the top comment of the post), but I often think back to this post when I think about being transparent about the work I am doing. Overall I think it is great for EA orgs to write such posts and I wish more groups would do so.
This post takes a well-known story about impact (smallpox eradication), and makes it feel more visceral. The style is maybe a little heavy-handed, but it brought me along emotionally in a way that can be useful in thinking about past successes. I'd like to see somewhat more work like this, possibly on lesser-known successes in a more informative (but still evocative) style.
I considered Evan Williams' paper one of the most important papers in cause prioritization at the time, and I think I still broadly buy this. As I mention in this answer, there are at least 4 points his paper brought up that are nontrivial, interesting, and hard to refute.
If I were to write this summary again, I think I'd be noticeably more opinionated. In particular, a key disagreement I have with him (which I remember having at the time I was making the summary, but this never making it into my notes) is on the importance of the speed of moral progress v...
This topic seems even more relevant today compared to 2019 when I wrote it. At EAG London I saw an explosion of initiatives and there is even more money that isn't being spent. I've also seen an increase in attention that EA is giving to this problem, both from the leadership and on the forum.
Increase fidelity for better delegation
In 2021 I still like to frame this as a principal-agent problem.
First of all there's the risk of goodharting. One prominent grantmaker recounted to me that back when one prominent org was giving out grants, people would jus...
In How to generate research proposals I sought to help early career researchers in the daunting task of writing their first research proposal.
Two years after the fact, I think the core of the advice stands very well. The most important points in the post are:
None of this is particularly original. The value I added is collecting all the advice in a ...
I thought this post was a very thoughtful reflection of SHIC and what went wrong in approaching highschoolers for EA outreach, which is made all the more interesting given that as of 2021 high school outreach is now a pretty sizable effort of EA movement building. SHIC in many ways was too ahead of its time. I hope that the lessons learned from SHIC have made current high school outreach attempts more impactful in their execution.
Disclaimer: I am on the board of Rethink Charity which SHIC was a part of at the time of this post, but I am writing this review...
This post influenced my own career to a non-insignificant extent. I am grateful for its existence, and think it's a great and clear way to think about the problem. As an example, this model of patient spending was the result of me pushing the "get humble" button for a while. This post also stands out to me in that I've come back to it again and again.
If I value this post at 0.5% of my career, which I ¿do? ¿there aren't really 200 posts which have influenced me that much?, it was worth 400 hours of my time, or $4,000 to $40,000 of my money. I probably...
This post is really approximate and lightly sketched, but at least it says so. Overall I think the numbers are wrong and the argument is sound.
Synthesising responses:
Industry is going to be a bigger player in safety, just as it's a bigger player in capabilities.
My model could be extremely useful if anyone could replace the headcount with any proxy of productivity on the real problem. Any proxy at all.
Doing the bottom up model was one of the most informative parts for me. You can cradle the whole field in the palm of your mind. It is a small and pr
This is my favorite introduction to existential risk. It's loosely written from the perspective of global policy, but it's quite valuable for other approaches to existential risk as well. Topics discussed (with remarkable lucidity) include:
I think this post mostly stands up and seems to have been used a fair amount.
Understanding roughly how large the EA community is seems moderately useful, so I think this analysis falls into the category of 'relatively simple things that are useful to the EA community but which were nevertheless neglected for a long while'.
One thing that I would do differently if I were writing this post again, is that I think I was under-confident about the plausible sampling rates, based on the benchmarks that we took from the community. I think I was understandably un...
This made me more likely to give non-tax-deductibly, and gives a useful resource to link to for other people.
[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]
This is a concept that's relatively frequently referred back to, I think? Which seems like a reason to include it.
I think it's pointing to a generally important dynamic in moral debates, though I have some worry that it's a bit in "soldier" mindset, and might be stronger if it also tried to think through the possible strengths of this sort of interpretation. I'm also not sure q...
The EA forum is one of the key public hubs for EA discourse (alongside, in my opinion, facebook, twitter, reddit and a couple of blogs). I respect the forum team's work in trying to build better infrastructure for its users.
The EA forum is active in attempting to improve experience for its users. This makes it easier for me to contribute with things like questions, short forms, sequences etc, etc.
I wouldn't say this post provides deep truth, but it seeks to build infrastructure which matches the way EAs are. To me, that's an analogy to articles which...
I think this talk, as well as Ben's subsequent comments on the 80k podcast, serve as a good illustration of the importance of being clear, precise, and explicit when evaluating causes, especially those often supported by relatively vague analogies or arguments with unstated premises. I don't recall how my views about the seriousness of AI safety as a cause area changed in response to watching this, but I do remember feeling that I had a better understanding of the relevant considerations and that I was in a better position to make an informed assessment.
I reviewed this post four months ago, and I continue to stand by that review.
This post, alongside Julia's essay "Cheerfully," are the posts I most often recommend to other EAs.
This was a very practical post. I return to it from time to time to guide my thinking on what to research next. I suggest it to people to consider. I think about ways to build on the work and develop a database. I think that it may have helped to catalyse a lot of good outcomes.
I think this research into x-risk & economic growth is a good contribution to patient longtermism. I also think that integrating thoughts on economic growth more deeply into EA holds a lot of promise -- maybe models like this one could someday form a kind of "medium-termist" bridge between different cause areas, creating a common prioritization framework. For both of these reasons I think this post is worthy of inclusion in the decadal review.
The question of whether to be for or against economic growth in general is perhaps not the number-on...
[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]
I think a lot of EAs (including me) believe roughly this, and this is one of the best summaries out there.
At the same time, I'm not sure it's a core EA principle, or critical for arguing for any particular cause area, and I hesitate about making it canonical.
But it seems plausible that we should include this.
This continues to be one of the most clearly written explanations of a speculative or longtermist intervention that I have ever read.
I think this report is still one of the best and most rigorous investigations into which beings are moral patients. However, in the five years since it's been published it's influenced my thinking less than I had expected in 2017 – basically, few of my practical decisions have hinged on whether or not some being merits moral concern. This is somewhat idiosyncratic, and I wouldn't be surprised if it's had more of an impact on e.g. those who work on invertebrate welfare.
I first listened to Wildness in February 2021. This podcast was my introduction to wild animal welfare, and it made me reconsider my relationship to environmentalism and animal welfare. I've always thought of myself as an environmentalist, but I never really considered what I valued in the environment. But after listening to this, my concern for "nature" became more concrete: I realized that the well-being of individual wild animals was important, and because there could be trillions of sentient wild animals, extremely so. I especially liked the third episode, which asks tough questions about who and what nature is "for."
This post made talking about diversity in EA less of a communication minefield and I am very grateful for that.
This post was helpful to me in understanding what I should aim to accomplish with my own personal donations. I expect that many other EAs feel similarly -- donating is an important part of being an EA for many people, but the question of how to maximize impact as a small-scale individual donor is a complex puzzle when you consider the actions of other donors and the community as a whole. This post is a clear, early articulation of key themes that show up in the continual debate and discussion that surround real-world individual donation decisio...
I thought this post was interesting, thoroughly researched and novel. I don't really recall if I agree with the conclusion but I remember thinking "here's a great example of what the forum does well - a place for arguments about cause prioritisation that belong nowhere else"
[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]
I think this is one of the best introductions to the overall EA mindset.
I think this line of thinking has influenced a lot of community building efforts, mostly for better. But it feels a bit inside-baseball: I'm not sure how much I want meta pieces like this highlighted in "best of EA's first decade".
This essay had a large effect on me when I read it early on in my EA Journey. It's hard to assign credit, but around the time I read it, I significantly raised my "altruistic ambition", and went from "I should give 10%" to "doing good should be the central organizing principle of my life."
I know many smart people who disagree with me, but I think this argument is basically sound. And it has, for me anyway, formed a healthy voice in my head pushing me towards strong conviction.
Congratulations on a very interesting piece of work, and on the courage to set out ideas on a topic that by its speculative nature will draw significant critique.
It's very positive that you decided on a definition for "civilizational collapse", as this topic is broadly and loosely discussed without the associated use of common terminology and meaning.
A suggested further/side topic for work on civilizational collapse and consequences is more detailed work on the hothouse earth scenario (runaway climate change leading to 6C+ warming + ocean chemistry change...
This post highlighted an important problem that would have taken much longer to address otherwise. I would point to this post as an example of how to hold powerful people accountable in a way that is fair and reasonable.
(Disclosure: I worked for CEA when this post was published)
This had a large influence on how I view the strategy of community building for EA.
This was popular, but I'm not sure how useful people found it, and it took a lot of time. I hoped it might become an ongoing feature, but I couldn't find someone able and willing to run it on an ongoing basis.
Most people who know about drugs tend to have an intuitive model of drug tolerance where "what goes up must come down". In this piece, the author shows that this intuitive model is wrong, for drug tolerance can be reversed pharmacologically. This seems extremely important in the context of pain relief: for people who simply have no option but to take opioids to treat their chronic pain, anti-tolerance would be a game-changer. I sincerely believe this will be a paradigm shift in the world of pain management, with a clear before-and-after cultural shift arou...
I personally know of thousands of extra pounds going to AMF because of this post.
[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]
I think this still might be the best analysis of an important question for longtermism? But it is also a bit in-the-weeds.
[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]
If we include Helen's piece, I think it might be worth including both this and the first comment on this post, to show some other aspects of the picture here. (I think all three pieces are getting at something important, but none of them is a great overall summary.)
There can never be too many essays recommending that EAs not push themselves past their breaking point. This essay may not be the most potent take on this concept, but since there are bound to be some essays on optimization-at-all-costs among the most-upvoted EA essays, there should be some essays like this one to counterbalance them. For instance, this essay is an expanded take on the concept, but is too new to be eligible for this year's EA Review.
I think that it's generally useful to share a clear paradigm which is useful for non-experts, based on deep knowledge of a subject, and that's what I tried to do here. In this case, I think that the concept and approach are a very generally useful point, and I would be very excited for more people to think about Value of Information more often, though as the post notes, mostly informally.
Apparently this post has been nominated for the review! And here I thought almost no one had read it and liked it.
Reading through it again 5 years later, I feel pretty happy with this post. It's clear about what it is and isn't saying (in particular, it explicitly disclaims the argument that meta should get less money), and is careful in its use of arguments (e.g. trap #8 specifically mentions that counterfactuals being hard isn't a trap until you combine it with a bias towards worse counterfactuals). I still agree that all of the traps mentioned here are ...
I don't have time to write a detailed self-review, but I can say that:
Focusing on tax-deductibility too much can be a trap for everyday donors, including myself. I keep referring to this article to remind my peers or myself of that.
One piece of information is not mentioned: At least in some countries, donating to a not-tax-deductible charity may be subject to gift tax. I recommend that you check whether this applies to you before you donate. But even then the gift tax can be well worth paying.
This post helped clarify to me which causes ought to be prioritized from a longtermist standpoint. Although we don't know the long-term consequences of our actions (and hence are clueless), we can take steps to reduce our uncertainties and reliably do good over the long term. These include:
This approach seems to be neglected by GiveWell, and not taken up by others in this space. (I don't have time to write a full review.)
This post seems to have started a conversation on diversity in EA:
I think the approach taken in this post is still good: make the case that extinction risks are too small to ignore and neglected, so that everyone should agree we should invest more in them (whether or not you're into longtermism).
It's similar to the approach taken in the Precipice, though less philosophical and longtermist.
I think it was an impactful post in that it was 80k's main piece arguing in favour of focusing more on existential risk during a period when the community seems to have significantly shifted towards focusing on those risks, and during ...
As explained in the review of Log Scales, cluster headaches are some of the most painful experiences people can have in life. If a $5 DMT Vape Pen produced at scale is all it takes to fully take care of the problem for sufferers, this stands to be an Effective Altruist bargain.
In the future, I would love to see more analysis of this sort. Namely, analyses that look at particular highly painful conditions (the "pain points of humanity", as it were), and identify tractable, cost-effective solutions to them. Given the work in this area so far, I expect...
I thought this post was great for several reasons:
- It generated ideas and interesting discussion about the definition of one of the most important ideas that the community has developed.
- I regularly referred back to it as "the key discussion about what Longtermism means". I expect if Will published this as an academic paper, it would've taken years to come out and there wouldn't be as much public discussion.
- I'm grateful Will used the forum to share his fairly early thoughts. This can be risky for a thinker like him, because it exposes him to publ...
[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]
I like this piece.
This post offered concrete suggestions for increasing representation of women at EA events and in the movement as a whole. Before reading this, I thought of diversity-type issues as largely intractable, and that I had limited influence over them, even at the local level.
Immediately after reading this, I stopped doing pub socials (which was the main low-effort event I ran at the time). Over time, I pivoted towards more ideas-rich and discussion-based events.
There has been too little focus on nuclear risks in EA compared to their importance, and I think that this post helps ameliorate that. In addition to the book itself, which was worth highlighting, the review also functions as a good review of how systemic incentives create issues up to and including existential risks, and allows us to think about how to address them.
As the author of this post, I found it interesting to re-read it more than a year later, because even though I remember the experience and feelings I describe in it, I do feel quite differently now. This is not because I came to some rational conclusion about how to think of self-worth vs instrumental value, but rather the issue has just kind of faded away for me.
It's difficult to say exactly why, but I think it might be related to the fact that I have developed more close friendships with people who are also highly engaged EAs, where I feel that they genuine...
I still think this was a useful post. It's one of many posts of mine that seem like they were somewhat obvious low-hanging fruit that other people could've plucked too; more thoughts on that in other self-reviews here and here.
That said, I also think that, at least as far as I'm aware, this post has been less impactful and less used than I'd have guessed. I'm not actually aware of any instances where I know for sure that someone used this post to pick a research project and then followed it through to completion. I am aware of two other well-received...
(I'm the author)
Yep, I still endorse the post. It does what it says on the tin, and it does it well. Highest compliment I've received about it (courtesy of Katja): Good Judgment project guy got back to us [...] and also said, “And I just realized that your shop wrote a very smart, subtle review of Tetlock’s book Superforecasting a couple years ago. I’ve referred to it many times.”
I recently had an opportunity to reflect on how it influenced me and what if anything I now disagree with:
...Two years ago I wrote a deep-dive summary of Superforecasting a
This post did something really good for how I see the world's problems. So much of what's wrong with the world is the fault of no one. Encapsulating the dynamics at play into "Moloch" helped me change the way I viewed/view the world, at a pretty fundamental level.
I still think this post was making an important point: that the difference in cause views in the community was between the most highly engaged several thousand people and the more peripheral people, rather than between the 'leaders' and everyone else.
There is still little writing about what the fundamental claims of EA actually are, or research to investigate how well they hold, or work to communicate such claims. This post is one of the few attempts, so I think it's still an important piece. I would still really like people to do further investigation into the questions it raises.
I thought this post was particularly cool because it seems to be applicable to lots of things, at least in theory (I have some concerns in practice). I'm curious about further reviews of this post.
I find myself using the reasoning described in the post in a bunch of places related to the prioritization of longtermist interventions. At the same time, I'm not sure I ever get any useful conclusions out of it. This might be because the area of application (predicting the impact of new technologies in the medium-term future) is particularly challenging. (...
This post represents the culmination of research into the severity of the risks of nuclear war. I think the series as a whole was very helpful in figuring out how much the EA movement should prioritize nuclear risk and whether nuclear risk represented a true existential risk. Moreover, I think this post in particular was a great example of how there can be initial errors in analysis and how these errors can be thoughtfully corrected.
Disclaimer: I am co-CEO at Rethink Priorities and supervised some of this work, but I am writing this review in a personal ca...
This post represents the various team opinions on invertebrate sentience (including my own) and I think was a great showcase of how people's opinions looking at the same information can differ and how to present this complexity and nuance in an approachable way. I also think it continued to help make the case that invertebrate welfare is worth taking seriously and that this case was made in a way that was credible and taken seriously by the EA movement.
Disclaimer: I am co-CEO at Rethink Priorities and supervised some of this work, but I am writing this rev...
This post establishes a framework for comparing the moral status of different animals and has started a research program within Rethink Priorities that I think will eventually lead to a large and meaningful reprioritization of resources within the EA movement that accounts for much more accurate and well thought out views of how to prioritize within animal welfare work and how to prioritize between human-focused work and nonhuman-focused work.
Disclaimer: I am co-CEO at Rethink Priorities and supervised some of this work, but I am writing this review in a p...
I appreciated this post as culmination of over a year of research into invertebrate sentience done by a team at Rethink Priorities that I was a part of. Prior to doing this research, I was pretty skeptical that invertebrates were of moral concern and moreover I was skeptical that there even was a tractable way to figure it out. Now, we somehow managed to make a large amount of forward progress on figuring out this thorny issue and as a result I believe invertebrate welfare issues have a lot more forward momentum both within Rethink Priorities and elsewhere...
This post is concise and clear, and was great for helping me understand the topics covered when I was confused about them. Plus, there are diagrams! I'd be excited to see more posts like this.
[Disclaimer: this is just a quick review.]
As the Creative Writing Contest noted, Singer's drowning-child thought experiment "probably did more to launch the EA movement than any other piece of writing". The key elements of the story have spread far and wide -- when I was in high school in 2009, an English teacher of mine related the story to my class as part of a group discussion, years before I had ever heard of Effective Altruism or anything related to it.
Should this post be included in the decadal review? Certainly, its importance is undisputed. If anything, Singer's es...
[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]
I think this piece is a nice short summary of one of the very most core principles of EA thinking.
[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]
Seems like a nice short summary of an important point (despite its current karma of 2!)
[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]
I really like the direct, personal, thoughtful style of this talk, and would like to see more posts like it. Seems like maybe one of the best intros-of-this-length to the reasons for working on AI alignment.
[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]
I think this is a nice summary of some important community norms.
[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]
[COI I helped fund this work and gave feedback on it.]
I think this is one of the best public analyses of an important question.
[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]
I think that this is a nice visual, metaphorical, introduction to some economics/cause prioritization theory.
Looking back, I think it's trying to do a lot and so doesn't have a single clear point in the way that the most compelling pieces do. I could imagine it being pretty good cut into separate sections and referenced when the appropriate concept comes up.
This post introduced the "hinge of history hypothesis" to the broader EA community, and that has been a very valuable contribution. (Although note that the author states that they are mostly summarizing existing work, rather than creating novel insights.)
The definitions are clear, and time has proven that the terms "strong longtermism" and "hinge of history" are valuable when considering a wide variety of questions.
Will has since published an updated article, which he links to in this post, and the topic has received input from others, e.g. this critique f...
Since I originally wrote this post I've only become more certain of the central message, which is that EAs and rationalist-like people in general are at extreme risk of Goodharting ourselves. See for example a more recent LW post on that theme.
In this post I use the idea of "legibility" to talk about impact that can be easily measured. I'm now less sure that was the right move, since legibility is a bit of jargon that, while it's taken off in some circles, hasn't caught on more broadly. Although the post deals with this, a better version of this post might...
In the past year I have seen a lot of disagreement on when cultivated meat will be commercially available, with some companies and advocates saying it will be a matter of years and some skeptics claiming it is technologically impossible. This post is the single best thing I have read on this topic. It analyses the evidence from both sides, considers the rate of technological progress that will be needed to lead to cultivated meat, and makes realistic predictions. There is a high degree of reasoning transparency throughout. Highly recommended.
Some quick self-review thoughts:
Very enlightening and useful post for understanding not only life sciences, but other areas of science funding as well.
These investigations into suffering intensity challenge widely accepted, and conveniently comfortable, beliefs about how other species experience suffering.
This line of study seems more native to, and dependent on, EA than others. It may be a major achievement of the Effective Altruism movement.
This particular paper brings to attention the idea that different creatures can experience time differently.
This idea is both really obvious and also weird and hard to come up with. I think that is common of many great ideas and contributions.
This article affected me a lot when I first read it (in 2015 or so), and is/was a nontrivial part of what I considered "effective altruism" to mean. Skimming it again, I think it might be a little oversimplified, and has a bit of a rhetorical move that I don't love of conflating "what the world is like" vs "what I want the world to be like."
Still, I think this article was strong at the time, and I think it is still strong now.
Like NunoSempere, I appreciate the brutal honesty. It's good and refreshing to see someone recognize the lies in the thing that a) their society views as high-status and good and b) they personally have a vested interest in believing is really good.
I think this is an important virtue in EA, and we should applaud it in most situations where we see it.
This post (and also chapter 2 of Doing Good Better, but especially this post) added "we're in triage" to my mental toolbox of ways to frame aspects of situations. Internalizing this tool is an excellent psychological way to overcome forces like status quo bias (when triage is correct), and sometimes an excellent way to get people to understand why we sometimes really ought to prioritize doing good over making our hands feel clean.
I would guess that this post would be even better if it was more independent of the podcast episode.
This is an emotionally neutral introduction to thinking about solving global issues (compared to, for example, this somewhat emotional introduction). The writing uses an example of one EA-related area but does not consider other areas. Thus, this piece should be read in conjunction with materials overviewing popular EA cause areas and ways of reasoning to constitute an in-depth introduction to EA thinking.
Corporate campaigns have been a large part of the "effective animal advocacy" movement since they exploded in funding and effort in 2015. Given this prominence, it was strongly worth investigating whether corporate campaigns were worth the prior investment and - more importantly - would be worth the continued marginal investment into the future. This review established that corporate campaigns actually seem to have been a strong success.
I also think this post demonstrates a strong and thoughtful approach to cost-effectiveness estimation that served as a tem...
An introductory reading list on wild animal welfare that covers all the important debates:
The post was part of a 2018 series, the Wild Animal Welfare Literature Library Project.
Wild animal welfare has increased in prominence since then, e.g. Animal Charity Evaluators has regularly identified wild animal welfare as a key cause area.
I regularly refer back to this piece when thinking about movement-building or grants in that space. It provides a lot of really thoroughly-researched historical evidence as well as clear insight. It's a shame that it only has a few karma on the forum - I wouldn't want that to cloud its significance for the decade review.
Writing something brief to ensure this gets into the final stage - I recall reading this post, thinking it captured a very helpful insight and regularly recalling the title when I see claims based on weak data. Thanks Greg!
For me, and I have heard this from many other people in EA, this has been a deeply touching essay and is among the best short statements of the core of EA.