Context
I've just finished going through the EA Handbook as part of the Introductory EA Program. My experience with EA predates this. I'd like to present some of my independent impressions and gut reactions to the ideas I came across in the handbook.
My post is intended to be low effort, i.e. my ideas here haven't gone through any rigorous thought and I'd love to receive any kind of feedback. I am totally open to hearing that my arguments are rubbish and why.
My takes are mainly targeted at the material I encountered while going through the EA Handbook, even though this ignores other knowledge and experiences I have with EA from before. As a side effect, it may look like I'm criticizing the EA Handbook as a weak link, whereas my intention is just to express my spiciest takes in the hope of receiving feedback I can use to efficiently update my thinking. (I.e., I don't know what the steel-man versions are; assume I am poorly read but curious.)
Impression 1: Maximization (#rant)
My strongest reaction to the EA Handbook is against the idea of maximization of good, which I feel is presented as a recurring theme across chapters 1-3. If not for the final post "You have more than one goal, and that's fine", I could easily have gotten the impression that maximization is a core belief and that EA might be encouraging people to take it too far.
My reactive stance to what felt like an aggressive thrust looks like this:
- I am against maximization. (I'm not a utilitarian, but even if I were, I would still retain some arguments against maximization.)
- I am against rationality being a top priority (e.g., doing good being #1 and trying to do more good by relying on rationality being #2). I think it is fundamentally unhealthy for most people to try to live by.
- I am against all "should"s. I would disagree with a statement such as "EAs should update their beliefs if provided with decisive evidence against their beliefs", or the idea that being/acting illogically is wrong/bad/not-EA.
- I am against the assertion that the world is a bad place, even if it is the case that there are moral catastrophes happening all around us. To assert that "things aren't good enough" seems to be a slippery slope. There is a natural parallel to psychology about long-term improvement being impossible without self-acceptance. Example of the dangerous slope: "we're not doing enough because there are still moral catastrophes". After reducing moral catastrophes in the world by 99%: "we're not doing enough because there are still moral catastrophes, and we can still do much better". Something something Aristotelian idea that if something can reach a good potential state then it was already good to begin with?
- I personally feel that the EA Handbook goes too far in hinting that making suboptimal decisions is not-very-EA. I believe this viewpoint is unjustified on a logical level (as well as an emotional one). Firstly, in real life we don't face trade-off decisions where we know accurately that option 1 has a net expectation of saving 50 lives and option 2 has a net expectation of saving 100 lives, with "all else being equal". All else is never equal, not in the spillover effects, nor in the way making that decision affects us personally. Even if we encountered a real-life scenario that appeared on paper to be exactly like this, some percentage of the time the seemingly better decision would backfire, because our understanding of the two systems was inadequate and our estimates, which seemed completely logical, were wrong due to a blind spot. Secondly, choosing the suboptimal option might lead us to update our overall decision-making faster than always choosing the better option on paper. Mistakes are necessary for learning. I believe even an AI superintelligence would not be exempt from this. Making a mistake (or doing an experiment) can be a local suboptimum that's part of a global optimum.
- I support the idea of individuals having a "moral budget" and that we have an absolute right to decide how big our moral budget is and what to spend it on.
- Utilitarianism asks too much of us. What would the world look like if everyone acted like the person who donated a kidney on the basis that their own life isn't worth more than the lives of 4,000 strangers? It isn't obvious to me that life would be superior, or even not-disastrous, if everyone applied that reasoning literally everywhere in their lives.
- If everyone on earth suddenly became a hard-core (maximizing) EA, the living population would be far more miserable in the short-term than if we suddenly became soft (non-maximizing) EAs. Although both scenarios could hypothetically lead to balancing out to the optimum in the long run, I would argue that the latter population would reach the optimum quicker 100% of the time, all else being equal.
To tie these points together in a maybe-coherent way, my reasoning against maximization is that:
- Maximization is not a viable/sustainable/satisfying way to live life on a well-being level. And if we want to do a lot of good for a long time, maybe we don't want to promote ways of life that aren't sustainable. EA isn't life itself and having boundaries is healthy. (Counter-argument against myself though, I think there is a case that some individuals can do significantly more good by sacrificing some of their individual happiness, and that this maximizes their value preference.)
- Maximization isn't necessarily morally superior to non-maximization at the individual level, and making the assertion that maximization is superior would be considered at odds with healthy psychology in most modern frameworks. For example, under the NVC (Nonviolent Communication) model, the non-violent way to promote maximization would be to describe it as merely a strategy that suits a particular value judgment (or value preference). To this end, I would say that maximization is currently communicated in a violent way in at least two places within the EA Handbook.
- Even if we assume that maximization is fundamentally correct, human beings aren't currently capable of computing objective functions with millions of variables, let alone gathering the data for it within a reasonable time frame for a single evaluation of that objective function to be applicable. In other words, if we could somehow operate as 100% rational beings while having similar cognitive abilities as we do now, it would still be rational to live non-rationally. And of course we aren't rational beings and therefore it is even more okay to live non-rationally. If one day we could perform useful and instant computations for maximization, such as with the help of superintelligence, then perhaps the correct way to live as human beings would shift. However, at that point it would seem that superintelligence or other forms of engineering could do all the rationally correct things for us, in which case it's not clear that it would be optimal for us to exist as a species by that time.
- The "moral budget" concept is more like a simplification, an approximation strategy that takes into account the fact that we have limited ability to calculate things and that we generally do not know our exact sustainable limits for tolerating constant change or discomfort in the name of moral pursuits.
Impression 2: What is EA?
I would describe EA as a movement and a community. One of the questions posed was "Movements in the past have also sought to better the state of altruistic endeavors. What makes EA fundamentally distinct as a concept to any altruistic movement before now or that might come after?" My answer is "Maybe nothing, and that's okay."
My tentative definition of EA would currently boil down to something like "EA is a movement that promotes considering the opportunity cost of our altruistic efforts." This seems like such a "small" definition and yet I can't find immediate fault with it.
Disclosure: I consider myself an EA and I have a light interest in my personal definition of EA not ruling me out as one.
Impression 3: Equality and Pascal's mugging
> ...we should make it quite clear that the claim to equality does not depend on intelligence, moral capacity, physical strength, or similar matters of fact. Equality is a moral idea, not an assertion of fact. There is no logically compelling reason for assuming that a factual difference in ability between two people justifies any difference in the amount of consideration we give to their needs and interests. The principle of the equality of human beings is not a description of an alleged actual equality among humans: it is a prescription of how we should treat human beings.
Chapter 3 makes strong claims about equality in a way that seems to come out of nowhere and is also logically contradicted in disturbing ways by Chapter 4 (existential risks).
If:
- all humans have equal rights and equal moral deserts, and
- we are consequentialists trying to maximize fulfillment of that, and
- we estimate there will be 100 trillion lives if there is never a human-caused extinction event, and
- we believe the average utility of human lives will remain positive and broadly comparable to current utility, then:
almost any moral crime (such as wiping out 99% of the current population) that reduces the risk of human extinction by 0.01% can be justified in the name of the equal right to life (see the rough arithmetic sketched below). Equal deserts seem to lead to extremely non-equal treatment in the present. Many generations after us could also keep making the same justification.
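To make the arithmetic behind this concrete, here is a minimal sketch. The figures (100 trillion future lives, roughly 8 billion people today, and reading "0.01%" as 0.01 percentage points of extinction risk) are illustrative assumptions of mine, not numbers from the Handbook:

```python
# Rough sketch of the naive expected-value arithmetic described above.
# All inputs are illustrative assumptions, not figures from the Handbook.

future_lives = 100e12        # assumed total future lives if extinction never happens
current_population = 8e9     # approximate current world population
risk_reduction = 0.0001      # reading "0.01%" as 0.01 percentage points of extinction risk

expected_future_lives_gained = risk_reduction * future_lives  # 10 billion in expectation
lives_lost_today = 0.99 * current_population                  # ~7.9 billion

print(f"Expected future lives gained: {expected_future_lives_gained:.2e}")
print(f"Lives lost today:             {lives_lost_today:.2e}")
# Under naive expected-value maximization the gain (~1e10) exceeds the loss (~7.9e9),
# which is exactly the uncomfortable conclusion the argument points at.
```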
Even without invoking Pascal's-mugging-style x-risk assumptions, we still run into logical arguments for justified discrimination. If everyone has equal rights, but some people can save or improve millions of lives rather than just the hundreds estimated from donating 10% of lifetime earnings, isn't the best way to promote equal rights, ironically, to enable the people who can get us closer to being able to sustain everyone, while neglecting the currently neglected who are less likely to help us reach that state?
There are even more Pascal's mugging dilemmas with digital minds or AI superintelligence, and I feel skeptical that we should embrace being mugged.
Weird relief from Pascal's mugging
There actually appear to be some weird benefits of Pascal's mugging in the trillions of future lives scenarios. One of them is that almost nothing we do in this century makes a dent in the total scale of utility of human lives, so long as we don't destroy that long-term potential. Therefore, even if we do happen to exist in the most pivotal century of all past and future time in the universe, moral catastrophes are pretty insignificant and even the timeline of progress is insignificant (all else being equal). Heck, if we had reached the industrial age 1000 years later than we did, almost no potential would have been lost, at least on a percentage basis.
Here's another one. If we will inevitably reach a state where resources are effectively limitless, then at that point we can naturally or artificially create billions of lifespans that are kept perpetually happy. It could be billions of rats on drugs, humans on happy drugs and zero suffering (is this objectively better than more balanced alternative experiences?), digital minds that are kept happy and live on for hundreds of thousands of years? Again, this seems like the opposite of the utility monster, where temporary goodness or badness seems insignificant as long as we end up reaching this capability of mass-producing utility.
Impression 4: Suffering and sentience
This was a pretty interesting and confusing topic to think about for me. Here are some random trinkets:
- If we become sure that we're living in a simulation and that the simulation creators are sentient, then hypothetically that can relieve a ton of moral responsibility from us, as well as free us from all sorts of Pascal's mugging scenarios. If we become extinct, it probably doesn't affect universal utility at all. Our simulators can probably simulate an order of magnitude more minds even if we do count as a form of digital mind with moral value.
- There was mention of a charity postulating that even low-intelligence computer algorithms running on a laptop could be suffering on a non-negligible scale. This seemed pretty far out there to me at first, but as I thought about it, the conclusion can come to seem plausible on the basis of just one or two ideas that aren't ridiculous at all. The first idea is some variant of the computational theory of mind. If human consciousness is a computational system, then it seems inevitable that it's possible to produce genuine consciousness on non-human hardware, and Turing completeness would imply that this consciousness could be implemented on a laptop. The second leap can take one of two forms. The first: if the hardware of the laptop is fundamentally capable of sustaining consciousness, who's to say it isn't already conscious or suffering, just in a way that we don't understand/perceive/detect? The second: if human minds are computers and human suffering is "morally real", then human suffering is a form of algorithmic suffering. Maybe there are other forms of algorithmic suffering even when consciousness isn't present, or when non-human forms of consciousness that we don't recognize are present.
- If aliens with consciousness exist, they might have very different concepts of morality to us. How do we know that their concepts won't be superior to ours? Should that diminish our opinion of the importance of our own existence and moral weighting (e.g., do we become far closer to other animals)? These exact same ideas apply to AI superintelligence. We are worried about AI misalignment, but is this because we're being selfish with our value preferences, or because we believe for a fact that our perception of value preferences will always be closer to a hypothetically existent "true universal utility function" than even superintelligence? Suppose we achieve conscious superintelligence but also discover that AI/human alignment is categorically impossible, at least in that first form of superintelligence. That conscious superintelligent AI would have its own value preferences and perhaps its own sense of morality. Would humanity ever be able to willingly accept its version of morality over our own? Would we allow it to teach us a concept which might violate the natural instincts of our natural biological hardware?
- Malevolent AI putting us in unbreakable human slavery S-risk scenarios. I am skeptical about this as a real concern. I have some creative arguments against it but I don't think my arguments are logically robust at all :)
Impression 5: Animal welfare
I found the content on animal welfare unexpectedly mild and not very challenging. This seems in stark contrast to the perspectives and actions I see within the EA community. I'd be interested to hear suggestions on beginner-friendly reading material that makes a more compelling case. Basically I'd like to know why veganism is so common in EA and why I often hear the hand-waving suggestion that going vegan probably makes more of an impact than other things in EA. In case I'm missing out, as it were.
Closing remarks
All in all, I got a lot of value from the EA Handbook. There are many things that can be improved about it, but if I had to pick two:
- The "editing" quality (of course it's not exactly a book, but the cohesion, continuation, consistency) was one of the lowlights for me. There are many editing issues, and the inaccurate time estimates especially were a cause of frustration. Things feel out of date even if they're not.
- I would have liked to see a post providing a balanced example of someone applying EA principles to their life over several years. Given the intensity of the ideas being presented, I think it would be useful to include a healthy example of someone chipping away at things: maybe taking one or two years to explore a topic of interest, engaging with various EA resources or people to steer their journey, eventually taking a donation pledge, etc. It's nice to convey the message that everyone has their own pace and that's okay, and that taking EA seriously doesn't mean it has to be a sprint.
Thanks for writing this post Victor, I think your context section represents a really good, truth-seeking attitude to come into this with. From my perspective, it is also always good to have good critiques of key EA ideas. To respond to your points:
1 and 2. I agree that the messaging about maximisation carries the danger of people taking it too far, but I think it is quite defensible as an anchor point. Maybe this should be more present in the handbook, but I think it is worth saying up front that >95% of EAs' lives don't look like that of the extreme naive optimiser in your framing.
I think I see EA more as asking "how can we do the most good with X resources", where it is up to you to determine X in terms of your time, money, career, etc. When phrases begin with "EAs should", I generally interpret that as "if you want to have more impact, then you should". I think the moral demandingness aspect is actually not very present in most EA discourse, and this is likely best for ensuring a healthy community.
EAs are of course human too, and the community, from what I have seen of it, is generally very supportive of people making decisions that are right for themselves when necessary (e.g. career breaks, quitting a job which was very impactful, changing jobs to have kids, etc. - an example (read the comments)). Even if you are a "hard-core utilitarian", I think placing some value on your own happiness, motivation, etc. is still good for helping you achieve the best you can. Most EAs live on quite healthy salaries, in nice work environments, with a supportive community - while I don't deny that there are also mental health issues within the group, I think EA as a movement thus far hasn't caused many people to be self-sacrificial to the point of being detrimental to their wellbeing.
On whether maximisation is a good goal in the first place: the current societal default in most altruistic work is to not consider optimisation or effectiveness at all. This has led to huge amounts of wasted time and money, which by extension has allowed massive amounts of suffering to continue. While your subpoint 5 about uncertainty is true, I think EA successes have proved the ability to increase the expected impact you have with careful thought and evidence, hence the value EA has placed on rationality. Of course people make mistakes and some projects aren't successful or might even be net negative, but I think it is reasonable to say that the expected value of your actions is what matters. If you buy that the effectiveness of interventions is roughly heavy-tailed, then you should also expect that the best options are much better than the merely "good" ones, and so it is worth taking a maximisation mindset to get the most value.
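If it helps, here is a minimal sketch of that heavy-tail intuition; the lognormal shape and its parameters are illustrative assumptions on my part, not estimates of real interventions:

```python
# Toy illustration: if intervention effectiveness is heavy-tailed (lognormal here),
# the best options are far better than typical ones, so searching for them pays off.
import random

random.seed(0)
# Simulated "impact per dollar" for 1,000 hypothetical interventions.
impacts = sorted(random.lognormvariate(0, 2) for _ in range(1000))

median_impact = impacts[len(impacts) // 2]
top_one_percent_avg = sum(impacts[-10:]) / 10

print(f"Median intervention impact: {median_impact:.1f}")
print(f"Average of the top 1%:      {top_one_percent_avg:.1f}")
# With these assumed parameters, the top handful of options come out tens to hundreds
# of times better than the median one - the core of the maximisation argument.
```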
I don't think saying "the world is a bad place" is a very useful or meaningful claim to make, but I think it is true that there is so much low-hanging fruit still on the table for making it much better, and that this is worth drawing attention to. People say things like "the world is bad" (which could be phrased in a better way) because, honestly, a lot of the world just doesn't care about massive issues like poverty, factory farming, or threats from e.g. pandemics or AI, and I think it is somewhat important to draw attention to the status quo being a bit messed up.
3. Ah, your initial point is a classic argument that I think targets something no EA actually endorses. Moral uncertainty and ideas of worldview diversification are highly regarded in EA, and I think everyone would immediately disregard acts that cause huge suffering today in the hope of increasing future potential, for both moral and epistemic uncertainty reasons.
I think your points regarding the insignificance of today's events for humanity's long term seem to rely heavily on a view of non-path-dependency - my guess is that how the next couple of centuries go on key issues like AI, international coordination norms, factory farming, and space governance could all significantly affect the long-term expected value of the future. I think ideas of hinginess are good to think about here, see: Hinge of history - EA Forum (effectivealtruism.org).
4. I agree it is generally a confusing topic and don't have anything particularly useful to say besides wanting to highlight that people in the community are also very unsure. Fwiw I think most S-risk scenarios people are worried about are more to do with digital suffering/astronomical scale factory farming. I think human-slavery type situations are also quite unlikely.
Thanks for the clarification about how 1 and 2 may look very different in the EA communities.
I'm not particularly concerned about the thought that people might be out there taking maximization too far; the framing of my observations is more like "here's what going through the EA Handbook might prompt me to think about EA ideas, or about what other EAs may believe".
After thinking about your reply, I realized that I made a bunch of assumptions based on things that might just be incidental and not strongly connected. I came to the wrong impression that the EA Handbook is meant to be the most canonical and endorsed collection of EA fundamentals.
Here's how I ended up there. In my encounters with EA resources, the Handbook is the only introductory "course", and presumably because it's the only one of its kind, it's also the only one that's been promoted to me over multiple mediums. So I assumed that it must be the most official introduction, having remained alone in that spot over multiple years; seeing it bundled with EA VP also seemed like an endorsement. I also made the subconscious assumption that, since there's plenty of alternative high-quality EA writing out there, as well as resources being put into producing it, the Handbook as a compilation is probably designed to be the most representative collection of EA meta, otherwise it wouldn't still be promoted to me the way it has been.
I've had almost no interaction with the EA Forum before reading the Handbook, so I had very limited prior context to gauge how "meta" the Handbook is among EA communities, or how meta any of its individual articles are. (Though now someone has helpfully provided a bunch of reading material that is also fundamental but offers quite different perspectives.)