
[EDIT (2024-03-15): I changed the original title from "There are no massive differences in impact between individuals" to "Critique of the notion that impact follows a power-law distribution". More explanation in this footnote[1].]

[EDIT (2024-03-27): Since publishing this essay, I have somewhat updated my views on the topic. I continue to endorse the majority of what is written below, though I would no longer phrase some of the conclusions as strongly/decisively as before. I have decided to leave the text largely as originally published, except for a modification in the Summary (clearly marked). For a more detailed account of my belief updates, alongside links to comments and external resources that prompted and informed my reconsiderations, see the Appendix.]


"It is very easy to overestimate the importance of our own achievements in comparison with what we owe others."

- attributed to Dietrich Bonhoeffer, quoted in Tomasik 2014(2017)

Summary

  • In this essay, I argue that it is not always useful to think about social impact from an individualist standpoint.
  • [This bullet point was edited on 2024-03-27: the original text is in strikethrough, followed by the version I now endorse; see Appendix for more details]
    ~~I claim that there are no massive differences in impact between individual interventions, individual organisations, and individual people, because impact is dispersed across~~ I argue that the claim that there are massive differences in impact between individual interventions, individual organisations, and individual people is complicated and possibly problematic, because impact is dispersed across
    • all the actors that contribute to the outcomes before any individual action is taken,
    • all the actors that contribute to the outcomes after any individual action is taken, and
    • all the actors that shape the taking of any individual action in the first place.
  • I raise some concerns around adverse effects of thinking about impact as an attribute that follows a power law distribution and that can be apportioned to individual agents:
    • Such a narrative discourages actions and strategies that I consider highly important, including efforts to maintain and strengthen healthy communities;
    • Such a narrative may encourage disregard for common-sense virtues and moral rules;
    • Such a narrative may negatively affect attitudes and behaviours among elites (who aim for extremely high impact) as well as common people (who see no path to having any meaningful impact); and
    • Such a narrative may disrupt basic notions of moral equality and encourage a differential valuation of human lives in accordance with the impact potential an individual supposedly holds.
  • I then reflect on the sensibility and usefulness of apportioning impact to individual people and interventions in the first place, and I offer a few alternative perspectives to guide our efforts to do good effectively.
  • In the beginning, I give some background on the origin of this essay, and in the end, I list a number of caveats, disclaimers, and uncertainties to paint a fuller picture of my own thinking on the topic. I highly welcome any feedback in response to the essay, and would also be happy to have a longer conversation about any or all of the ideas presented - please do not hesitate to reach out in case you would like to engage in greater depth than a mere Forum comment :)!

Context

I have developed and refined the ideas in the following paragraphs at least since May 2022 - my first notes specifically on the topic were taken after I listened to Will MacAskill talk about “high-impact opportunities” at the opening session of my first EA Global, London 2022. My thoughts on the topic were mainly sparked by interactions with the effective altruism community (EA), either in direct conversations or through things that I read and listened to over the last few years. However, I have encountered these arguments outside EA as well, among activists, political strategists, and “regular folks” (colleagues, friends, family). My journal contains many scattered notes, attesting to my discomfort and frustration with the - in my view, misguided - belief that a few individuals can (and should) have massive amounts of influence and impact by acting strategically. This text is an attempt to pull these notes together, giving a clear structure to the opposition I feel and turning it into a coherent argument that can be shared with and critiqued by others.

Impact follows a power law distribution: The argument as I understand it

“[T]he cost-effectiveness distributions of the most effective interventions and policies in education, health and climate change, are close to power-laws [...] the top intervention is 2 or almost 3 orders of magnitude (i.e. a factor 100 and almost 1000) more cost-effective than the least effective intervention.” - Stijn, 2021

 

“If you’re trying to tackle a problem, it’s vital to look for the very best solutions in an area, rather than those that are just ‘good.’ This contrasts with the common attitude that what matters is trying to ‘make a difference,’ or that ‘every little bit helps.’ If some solutions achieve 100 times more per year of effort, then it really matters that we try to find those that make the most difference to the problem.” - Todd, 2021(2023)

My understanding of the narrative that I disagree with and seek to tackle in this essay is as follows. First, the basic case for the impact maximisation imperative/goal (which I am broadly sympathetic to, though with some reservations on points e-g):

  • a) There are many problems in the world - poverty and ill-health, mistreatment of non-humans, war and violence, natural catastrophes, and risks related to the climate crisis, pandemics, weapons of mass destruction, and rapid technological advancements, to name just a few.
  • b) It would be better (by the standards of a wide range of different ethical beliefs and worldviews[2]) if there were fewer problems, or less severe problems.
  • c) There are things people like us could do to reduce the severity of many of these problems,
  • d) but we cannot solve all of them immediately.
  • e) If possible, it is better to focus our energies on the actions that will lead to the greatest reduction in problem severity across the range of problems we care about.
  • f) We have the tools and information to roughly estimate the severity of different problems, the success of past actions in addressing these problems, and the success chances of future actions.
  • g) Conclusion: We should use tools and information at our disposal to figure out which actions, or sets of actions / strategies, are most promising when it comes to reducing a large share of the problems our world faces, and we should then take those actions.

In addition to this, people in effective altruism, or in the impact maximisation space more broadly, have latched on to the idea that there are huge potential differences in the value (success in reducing global problem severity) of different approaches to doing good. This is often described as a “power law distribution of impact”, illustrated with something like Figure 1, and driven home by numerical claims such as “the top intervention is 2 or almost 3 orders of magnitude (i.e. a factor 100 and almost 1000) more cost-effective than the least effective intervention” (Stijn, 2021).

Figure 1: Screenshot taken from Todd, 2021 (2023), “The Best Solutions Are Far More Effective Than Others”, 80,000 Hours.

As far as I can tell, the “power law claim” first emerged explicitly in the field of global health and development, based on a study by Ord (2013). It has been reexamined and somewhat moderated (Todd 2023), but the core message - some interventions are orders of magnitude more promising than others - was retained and even extended to other domains: from education, social programmes, and CO2 emissions reductions policies to efforts to change habits of meat consumption and voter turnout (Todd 2021(2023)). From anecdotal/observational evidence, the idea also seems common when people talk about approaches for reducing existential and global catastrophic risks, whether they are focusing on artificial intelligence, bioengineering and bioterrorism, nuclear weapons, or other related risks. Likewise, I have encountered the notion of widely differing impact potential in discussions around political/policy work more generally.
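To make the quoted numbers concrete: the claim is that cost-effectiveness scores are heavy-tailed, so the best intervention in a sample can be a factor of 100 to 1000 better than the worst. The sketch below illustrates this with simulated data only - the shape parameter and sample size are arbitrary assumptions, not figures from Ord (2013) or Todd (2021).

```python
import random

# Illustrative sketch with made-up numbers, not actual intervention data.
# Draw hypothetical "cost-effectiveness" scores from a Pareto (power-law)
# distribution and compare the best score to the worst.

random.seed(0)
alpha = 1.16          # assumed shape parameter; smaller alpha -> heavier tail
n_interventions = 1000
scores = [random.paretovariate(alpha) for _ in range(n_interventions)]

ratio = max(scores) / min(scores)
print(f"best / worst cost-effectiveness ratio: {ratio:.0f}x")
```

Under a heavy-tailed distribution like this, the best-to-worst ratio routinely spans two to three orders of magnitude, which is the pattern the “factor 100 and almost 1000” quote describes. The essay's argument below does not dispute that such distributions can be simulated or measured; it questions whether the resulting impact can be attributed to individuals.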

This seems off…

There are numerous value-laden and controversial assumptions in the basic case for impact maximisation that I laid out above, but I will not tackle these directly here (suffice it to say that I broadly agree with claims a-d, and have reservations about e-g, though I do think they have some validity). Instead, I focus on the specific claim that there are huge (orders of magnitude) differences in the value of different attempts to do good.

I agree that there is a massive difference between actions/strategies that are net-negative, neutral, and net-positive; in other words, I do agree that it is really important to figure out whether an action or strategy actually contributes to solving a problem at all, and whether it may cause unintended negative effects. However, I do not agree that there are vast differences in value among those actions and strategies that have crossed the bar of having a significant positive impact on the world; and I do not agree that some individuals are orders of magnitude more important for making the world better than everyone else.

In addition, I believe that an excessive focus on “securing high impact opportunities” is unhelpful to our collective effort of making the world a better place. I am slightly confused about whether that latter criticism is better framed as an empirical disagreement (about the consequences of employing the “power law distribution of impact” narrative), a normative concern (about the values and ideas that are conveyed by that narrative), or a conceptual issue (about the appropriateness and usefulness of the perspective as a model to understand the world). I have tried to tease out the specific points I take issue with, and I’m curious to hear in comments and feedback whether the write-up that follows seems coherent or whether you think I’m turning in circles and should have chosen a different categorisation.

Empirical problem: Impact results from numerous inputs and thus cannot be attributed straightforwardly to any one action

“No one changes the world alone, and no one doesn't change it at all.” - Hank Green (2017)

For most conceivable outcomes, it seems misleading to suggest that they are the result of one action, one individual, or one organisation. Impact does not seem to be a property that can sensibly be assigned to an individual. If an individual (or organisation) takes an action, there are a number of reasons why I think that the subsequent consequences/impact can't solely be attributed to that one individual:

  1. The action's consequences will depend a lot on how other people act, and on other features of the environment. Impact is thus the result of collective (often long-running) efforts, diffused in an unspecifiable manner across numerous individuals.

    For instance, the discovery of the polio vaccine by itself doesn't do much good. The impact it has depends on how the vaccine is distributed and administered, which depends on numerous state and private actors. It also depends on popular attitudes towards the vaccine (willingness/eagerness to make use of it), which depend on probably even more actors and environmental factors. I have great respect and gratitude for the people who produced the polio vaccine for the first time; but I don't think it makes sense to take the number of lives saved/improved and stuff all of that into their individual impact scorecard. If we must attribute quantified individual credit to the inventors of the vaccine (and I will argue below that probably we shouldn’t try to in the first place), it will have to be much smaller than the total good that resulted from the vaccine, as that total volume of impact would need to be distributed across all those actors influencing the trajectory of the vaccine’s administration (and across the people who influenced the developers themselves and their enabling environment, see points 2) and 3) below).
  2. Whether or not an action can be taken in the first place depends on the environment and on the actions of other people.

    For instance, most people will probably put the US president into the category “high-impact individual.” And there are certainly many impactful things the president can do that are not accessible to most other people. But for achieving most goals that can be said to really matter - a better healthcare system that actually improves wellbeing in the country; effective technology policy that actually reduces risks and/or advances life-improving technological developments; a sufficient response to the climate crisis; etc. - presidents themselves will tell you how incredibly constrained they are in bringing about these outcomes. The impact a president can have through sensible policies is determined by the actions of many other individuals (domestically and internationally), and it is also determined by the culture he or she operates in (the ideas that are considered normal, palatable, or even just conceivable). Yes, this individual can have an impact through their actions, but only in conjunction with the actions of many others. If we tried to account for all individuals that form part of the president’s enabling infrastructure (again, I will argue below that we probably can’t and should not try), I am sceptical whether the individualised impact that remains with the president’s actions truly is orders of magnitude higher than that of many other people.
  3. An individual’s decision to take an action is influenced by numerous factors, all of which partake in the impact that results from the decision.

    The individual taking an action is not a blank slate. She herself is influenced by numerous factors, including other people. Arguably, the impact from an individual's action can also be attributed to those people who influenced the individual into taking the action in the first place. The number of influencing factors is usually huge - from parents and others contributing to the individual's education/socialisation to friends and colleagues to people who are strategically trying to influence the individual all the way to culture, artists and storytellers who shape the ideas and intuitions that determine what seems sensible and normatively desirable to the individual.

All of this leads me to doubt whether impact can be empirically attributed to individuals at all, which I will try to flesh out in more detail below. Importantly for this section, it also leads me to suspect that, if an individualised impact attribution were brute-forced in spite of the empirical and conceptual difficulties, the eventual impact left with any one individual would not be a massive outlier (orders of magnitude apart from everybody else), because impact would be dispersed across a) all the actors that contribute to the outcomes before and after any individual action is taken, and across b) all the actors that shape the taking of any individual action in the first place.

Adverse effects: Does this idea/perspective encourage elitism, anti-democratic sentiment, and a differential valuation of human lives?

One of the concerns I have with the narrative that impact follows a power law distribution is that I’ve seen this idea used as a (strong) argument to discourage individuals from taking actions that I think should be considered highly valuable. More specifically, I have the impression that talk about “high impact opportunities” often (though not necessarily) goes hand-in-hand with a rather naive consequentialist attitude that pays attention mainly to direct, and often easily measurable, effects (discussed, among others, in Mandefro 2023). For instance, jobs that I would consider vital for a resilient society and an effective collective effort to make the world better - thoughtful teachers and social workers, caring doctors and nurses, responsive and diligent civil servants - are often discounted by those people who claim that individuals can and should aim to achieve magnitudes more in terms of positive impact than the average person (e.g., Lewis 2015). Likewise, a focus on maximising the impact of individual actions can easily lead to the discounting of acts that are meaningful only if performed persistently by a large group of people - casting a vote, participating in protests, signing petitions, engaging in dinner table conversations, and so on. If I am right that impact does not actually follow a power law distribution and that the jobs & acts just mentioned are actually much closer in importance to the “highest possible life paths” than suggested, this type of discouragement based on misleading claims seems at least mildly counterproductive for the goal of making the world better.

Relatedly, I am concerned that the belief in high impact differentials, and a concomitant neglect for the (indirect, long-running) impact of small actions, can lead individuals to disregard common-sense virtues and rules of behaviour if they stand in the way of a perceived “high impact opportunity.” While I certainly wouldn’t argue that common-sense morality is a sure guide to always doing “the right thing,” I do think that adopting a general disposition to (try to) act virtuously and to consistently - though not blindly - abide by certain behavioural heuristics (e.g., respect all people around you; don’t be disingenuous; be truth-seeking; etc.) is warranted. If the pursuit of maximum impact leads to a complete[3] sidelining of considerations around how to be a reliable, trustworthy, and constructive member of your communities, I think you’re paying a cost that’s higher than you should be willing to accept.[4]

"There is nothing more corrupting, nothing more destructive of the noblest and finest feelings of our nature, than the exercise of unlimited power."

- William Henry Harrison (former US President)

Putting debates around the actual distribution of impact to the side, I am worried about normative side-effects that seem concerning even if the underlying narrative is an apt description of the world. I fully acknowledge that the side-effects described in the next few paragraphs are not strictly implied by the claim that there can be magnitudes of difference in the impact two individuals have. But I find myself with an intuition and a plausible-sounding narrative that links claims around “power law distributions of impact” to the normative concerns I’m about to discuss. If there is anything to that intuition and such a link does exist in practice (even if only weakly), then I would argue that this gives good reason to pause and reconsider the use and propagation of the “power law” narrative.

I am concerned that the idea of huge impact differentials between actions reinforces the belief that a few vanguard individuals can fix the world - or fix some of the major problems we see in the world. I am concerned that this vanguardism weakens demands for measures that would strengthen communities, improve democratic culture and practice, and enhance our collective action capabilities, because strong communities, democratic participation, and effective collective action seem less vital if we can hope for a small group of elites to contribute the vast majority of inputs towards “making the world better.”

I am also concerned that a belief in a small-ish group of individual saviours could lead to hubris (Effectiviology n.d.) and an unwarranted sense of confidence and control (Fast et al. 2009), lust for power (Weidner & Purohit 2009; more polemically: Walton 2022), or burnout among the supposed highest-impact individuals and to perceived disempowerment, helpless indifference, or angry resistance among those who seem to be left out from that select group (Harrison 2023). All of this seems quite corrosive to societal decision-making processes, both at an individual level (people’s cognitive abilities to make sound decisions are hampered by the emotions listed before) and at the collective level (people are less willing and able to work together constructively if they are arrogant, worn-out, hopeless, mindlessly angry, and/or deeply distrusting of their fellow citizens). It also just seems quite undesirable to live in a world full of self-absorbed elites on the one hand, and disillusioned common people on the other.

You’re probably among the 0.1% most highly productive individuals [...] You’re probably worth the same as three average EAs

- Anonymous effective altruist (conversation I overheard at an EA community event)

Lastly, I am concerned about implicit, or sometimes quite explicit, judgments about the value of individuals when comparing them based on impact metrics (illustrated, for instance, by Lipshitz 2018, full book pdf). I think that there is a danger of switching from “some actions are more important than others” to “some people - the ones best-able to perform high-impact actions - are more important and thus more valuable than others.” I think that this is a problem because it undermines very basic community principles around solidarity and mutual respect, which I believe are vital for individual flourishing as well as sustainable collective problem-solving capabilities. This seems to be a thorny problem whenever impact is measured and attributed to individuals, but I imagine it will be worse the larger we believe potential differences in impact to be.

Conceptual problem: If we want to actually see the emergence of a better world, individualised impact calculations are not the best path to action

“Do your little bit of good where you are; it is those little bits of good put together that overwhelm the world.” - Archbishop Desmond Tutu

While examining the empirical validity of claims around the supposed “power law distribution of social impact,” I was growing increasingly sceptical about the goal of attributing impact scores to individual actions and individual people in the first place. As I was trying to argue that the results of any given action need to be distributed across a wide collection of contributing and enabling actions, I kept wanting to write (or shout at my screen): “But isn’t this a futile and misleading exercise? Impact simply isn’t concentrated in one individual. How can we even speak about the impact of one individual person in isolation, when their actions and their actions’ effects can only ever be observed in an interactive environment? Isn’t it fundamentally wrong-headed to think about achieving positive change in the world from an isolated, individualised standpoint?”[5]

The seemingly plausible (?) possibility of adverse effects - that an emphasis on high impact differentials may discourage bottom-up and collective action; that it may neglect and thus weaken the social fabric which we need as a backbone to and prime enabler of a thriving world - reinforces my sense that an emphasis on individual impact is not the best ingredient for an effective and sustainable movement strategy to make the world better.

As an alternative, I believe we - people trying to address pressing global problems - would do better to conceive of impact as the product of a large set of actions and people, not a quantity that can be apportioned sensibly to any one individual. We would probably benefit from a good deal of humility about the influence of any one action and from a steadfast (and simplicity-resistant) awareness of the complex environment we operate in. While analyses about the marginal benefits of two specific competing actions, roughly holding all other factors equal, may well retain their uses in some situations (see below), we might reconsider whether less atomised perspectives are better suited to guiding decision-making in other situations.

For instance, when finding myself in the possession of consequential information about a topic of public interest, maybe I should ask myself “What would I want the majority of other people to do in such a situation? Would I want them to retain their secret information to maximise their own level of influence within a certain sphere? Or would I want them to share the information publicly?” Or maybe, in deciding how to treat people around me, I should wonder which actions and attitudes make for a constructive collaborator, a reliable friend, or a respecting colleague, rather than calculating how best I can use my connections and “relationships” to catapult myself into the (possibly imaginary) category of super-high-impact-individuals - not (just) because there may be an intrinsic moral value in virtuous behaviour but also and especially because such virtuous behaviour may be one of the essential building blocks to achieving our optimal collective impact potential. And maybe, when choosing a job or making a career move, it is enough to ask “Which kinds of contribution do we as a community really need, which of those do we currently lack, and which am I well-placed to deliver?” rather than seeking to answer “Which position do I need to reach in order to be in the 99th percentile of an imaginary and narrowly-conceptualised global impact distribution?”[6]

Caveats and disclaimers

I wrote this essay primarily as a reflection on my own thoughts. As I outlined the caveats and uncertainties below, I did not necessarily have an audience in mind, and some of the paragraphs that follow may sound a bit aimless / redundant. But I think, or hope, that these meta-considerations may give readers additional insights into my thinking on the topic, allowing them to engage more fruitfully with the preceding arguments.

I can’t claim credit: People, including self-identified effective altruists, have discussed all of this already

Many of the claims I make above have been articulated by other people, quite possibly in more eloquent and straightforward terms. In this essay’s spirit of not apportioning individual credit where individual credit is not due, I felt compelled to add this subsection to acknowledge and reference at least some of the many writers who have expressed these ideas before me (and who have influenced my thinking on the topic).

Many people have highlighted how complex dynamics make impact evaluations exceedingly difficult (to name just a few: Justis 2022, EA Forum; Karnofsky 2011(2016), GiveWell blog; Wiblin 2016, talk at an EA Global conference; Reddel 2023, EA Forum; Smith 2019, former GiveWell analyst; Kozyreva et al. 2019, “Interpreting Uncertainty: A Brief History of Not Knowing”; Griffes 2017, EA Forum), and some have challenged the notion that there are huge differences in impact between interventions or organisations (Tomasik 2017, EA Forum “classic repost”).

Several authors have warned of adverse effects from the quest for super-high-impact opportunities, describing how individualised impact models can often account for our inability to pursue collective action effectively (Srinivasan 2015; Chappell 2022; Jenn 2023, “2. Non-Profits Shouldn’t Be Islands"; Remmelt 2021, “5. We view individuals as independent”; Lowe 2023; Lowe 2016).

Lastly, the students of meta-ethics among you will probably have noticed that my description of alternative decision frameworks draws heavily on established ethical doctrines. In my example of what to do when acquiring secret information that could either be used for private influence or released to the public, my question is clearly reminiscent of Kant’s Categorical Imperative (see Britannica n.d.)[7] and of rule-consequentialist arguments (e.g., Burch-Brown 2014). In my discussions around virtuous behaviour, I present a drastically shortened reproduction of the theory of “virtue consequentialism” (Bradley 2005), which has been endorsed in spirit though not in name by several “non-naive” consequentialists in the effective altruism space (e.g., Chappell 2022; Schubert and Caviola 2021; Oakley 2015, talk at an EA Global conference; Bykvist 2024, EA Stockholm's Nordic Talks Series) as well as by proponents of common sense morality and advocates for notions of basic human dignity across the social impact ecosystem (European Commission 2022; Jenn 2023). And in calling on people to pursue the more humble quest of finding one’s place in a collective project rather than striving to become an individual of outstanding impact, I employ notions familiar from satisficing theory (Slote and Pettit 1984; Ben-Haim 2021) and the literature on collective rationality (Finkel et al. 1989 (pdf); Gilbert 2006 (pdf); Byron 2008).

I don’t mean to throw the baby out with the bathwater: Looking for actions with an exceptionally high impact is not always entirely misguided

While this post clearly advocates for a reconsideration of the dominance of individualised impact evaluations within debates around how to make the world better, I do not want to denigrate considerations about individual contexts altogether. I think that it is probably often good and non-harmful to evaluate different actions with an eye to the impact each one is likely to have. I also think it can be sensible to rank some life and career paths in accordance with how much impact they may roughly afford (differentiating, for instance, between the net-negative role of a snakeoil salesperson, the probably neutral role of someone working a “bullshit job”[8], and the range of societally beneficial roles that lie beyond). But the crucial point I am trying to make in this essay is that this should not be the only lens to apply, that we should not forget about indirect effects and cumulative dynamics which individualised evaluations seem poorly-equipped to capture, and that we should not feel overly confident in how granular our impact attributions can be even if we put our best minds and methods to the task.

Remaining uncertainties and confusion

As mentioned before, I have spent quite some time thinking, reading and talking about the arguments in this essay. I think that I made decent progress on streamlining my ideas into a relatively coherent account. Even so, I remain uncertain about my final stance and its practical implications:

  • What is the crux between me and the people who buy into the narrative of power law distributions in impact?
  • To the extent that disagreements with my arguments persist after people have read and sincerely engaged with what I said here: How much should that shift my own confidence in my views, if at all?
  • I seem to[9] disagree fundamentally with one of effective altruism’s core propositions. What does or should that imply for my engagement with the community?
    • Is it dishonest if I apply to EA-ish positions, events, or funding programmes without alerting the applications committee very explicitly about that disagreement?
    • Is the disagreement a significant barrier to effective collaborations between myself and EA-minded people or organisations?
    • Conversely, is there some particular added imperative for me to engage in EA discussions, events, and initiatives, for the sake of raising intellectual diversity in these spaces and/or for the sake of exposing myself to cognitively challenging environments?
  • How much of my reasoning is a result of post hoc rationalisation of intuitions and cultural biases that I hold? If my thoughts on this were prompted by a cultural bias / unsubstantiated intuition, but I now cannot find major flaws in the reasoning I developed upon reflection, should I still be sceptical about my current views and conclusions?
    • Is every argument emerging from motivated/prejudiced reasoning poor or dubious? Or can an argument be sound, even if it was developed with the conclusion already in mind?
    • How different is this process from “the normal” belief formation process? How often do we form beliefs around important questions without having an intuition for the conclusion in mind a priori?

Acknowledgements

In the spirit of the essay, it would seem appropriate that I acknowledge and give credit to all the people who influenced my thinking and writing on this topic. But this list would be ridiculously long, burdensome to write, and time-consuming to get permission for (from all the people who I would name). I’ll thus refrain from naming anyone. Let it be known that I have benefitted from many, many people’s input to write this essay - whether they engaged with me directly on the topic, provided reading material to feed my thinking and help me defuse my confusion, shaped my epistemics and philosophy through conversations on other topics, or influenced my values and beliefs in yet more indirect ways. I am indebted to them all, and any good that this essay may accomplish is not credited to myself alone but must also be added to their impact scorecards (if keeping score is insisted upon in the first place).

Also worth noting: credit for the cover image goes to Ian Stauffer (photo taken from Unsplash).


Appendix: Shifts in my thinking since publishing this post

Update 2024-03-27: Since publishing this essay, I have somewhat updated my views on the topic. I continue to endorse the majority of what is written above, though I would no longer phrase some of the conclusions as strongly/decisively as before. I have decided to leave the text largely as originally published, but inserted a modification in the Summary. This Appendix contains a more detailed account of how my thinking has evolved since publishing the essay.

Differences in perspectives on impact assessment/modeling are a pretty big deal for thinking about the issues I address in the essay

On the empirical validity of the power law distribution of social impact

  • I’ve tentatively updated to the view that for some types of decision situations, counterfactual impact (at least to the extent that we can reasonably measure it) follows a power law distribution. [Thanks to comments by Jeff Kaufman, Denis, Larks, Oscar Delaney and Brad West, whose specific examples and pointed questions prompted and aided my reconsideration of this point]
  • I think that considerations around supporting and enabling actions flatten the distribution significantly compared to a more naive evaluation, but that it may well often remain power-law distributed. [Thanks to Owen Cotton-Barratt for spelling out a similar thought in comments.]
  • I think that considerations around uncertainty flatten the distribution of expected impact, possibly rendering it non-power-law distributed in many cases, but probably not in all. At least, there will still be cases where the expected impact of a solidly good action is clearly orders of magnitude smaller than the impact of one of the best actions (e.g., donations to AMF, which can simply be scaled up to be orders of magnitude higher). [Thanks to Owen Cotton-Barratt, Denis and Jeff Kaufman for spelling out a similar thought in comments.]
  • I retain the view that considerations around enabling and supporting actions are important and under-appreciated in most EA writing and conversations I have encountered. In part, as stated above, this is because they affect our view of counterfactual impact differences. But more strongly, I value these considerations a) because they may help ameliorate or guard against the adverse effects I mention; and b) because they open up avenues for considering alternative models of impact, which I think may be highly valuable complements to and sometimes appropriate substitutes for counterfactual reasoning.

On conceptual issues

  • I remain confused
  • The last few days have spurred a lot of further thinking on this topic, and I’m grateful for all the comments which have contributed to that. I think I now have a more well-considered take on the topic than before, which I consider progress. But I’m still relatively far from a good grasp of the topic overall and my views in particular. The quest for clarity will continue.
  • I (tentatively?) continue to think that purely individualised perspectives on impact lack something relevant: 
    • They make it hard to recognize/conceptualize the importance of engaging in collective projects where one person’s individual, expected, and measurable counterfactual impact may be low (or unclear, because expectations depend too much on subjective plausibility judgements regarding the long-running or higher-order effects of different actions), but the expected impact of the collective project can be vast. 
    • They make it easy to dismiss and discount the thousands (sometimes millions) of people whose supportive and enabling services we all depend on to have any meaningful impact. These people may be replaceable from a counterfactual point of view, but they are not irrelevant - if we lived in a world without communities of people performing these supportive roles, we would be screwed. This seems an important fact to keep in mind when looking at the world and at efforts to make it better.

Practical consequences for my thinking & actions

  • I will talk about the topic with a few more people, I will read, and I will attempt to further reflect and puzzle out my thinking on this topic.
  • I will consider applying counterfactual reasoning more seriously as one of the perspectives in my repertoire. Upon reflection, I can’t deny that I’ve been doing that throughout the last few years already, though more cautiously/humbly and with less force on my decisions than I’ve seen in other EAs. I intend to retain that caution and readiness to switch to other perspectives to inform my decisions. But I think it would be useful to be more explicit about the ways in which I do use counterfactual impact evaluations when making decisions.
  • The value of writing up, sharing, and discussing my thoughts on thorny questions has increased in my mind. I did consider this valuable before, but the experience with this post (and its many constructive comments) has demonstrated that it is probably more valuable than I previously thought.

 

  1. ^

    Update 2024-03-15: I was alerted to possible problems with the title by this comment, questioning whether the original title ("There are no massive differences in impact between individuals") matched the actual content of the post. Following a bit of discussion on that comment (see comment thread), I have come to the conclusion that the original title was indeed suboptimal. Quoting from a comment I left below:

    "I can see how my chosen title may be flawed because a) it leaves out large parts of what the post is about (adverse effects, conceptual debate); and b) it is stronger than my actual claim"

    I hope the revised title does a better job communicating what this post is about. (And I hope I'm not violating some unwritten norm against changing the title of a post a day after publication?)

  2. ^

    This holds for a variety of values a person may hold, encompassing at least the following set of intrinsic goals: wellbeing, absence of (extreme) suffering, flourishing, rights-fulfilment, justice, species survival.

  3. ^

    As a friend who graciously reviewed this essay has rightly pointed out, virtues and behavioural heuristics are not absolute. In many real-life situations, the attempt to act virtuously will require context-specific evaluations to figure out which action is actually most in line with the person one wants to be. That means that even very common-sensical and usually acceptable heuristics such as “Thou shalt not lie” cannot be adopted as general and universally applicable rules; there will be situations when lying is the best choice one has. I think the crucial point is that it’s important to pay serious attention to questions of virtue and the cumulative consequences of this or that principle, not so much to live dogmatically by certain dead-set rules.

  4. ^

    To tie this in with a real-life example that will be familiar to many readers on this Forum: Similar notions around naive utilitarianism and the “the importance of integrity, honesty, and the respect of common-sense moral constraints” (MacAskill 2022) have been discussed at length in the aftermath of the FTX debacle, where a self-avowed utilitarian from within the effective altruism community committed major financial fraud, incurring massive costs for customers and clients; see, for instance, Samuel 2022, evhub 2022, and Karnofsky 2022.

  5. ^

    I imagine some readers will respond with sympathy to my claims around the difficulties of empirical measurement, but will reject the conclusion that difficulty equates to futility. Readers may claim that it is possible to build models that reduce complexity to such an extent that all of the intervening variables mentioned above are neutralised. Certainly, building such models is possible - it is, from everything I’ve seen, the common practice in quantitative impact models, estimations, and assessments. But can it be done without completely losing touch with reality? Can there be models that are sufficiently simple to allow for individualised impact estimates, but which remain sufficiently true to the underlying reality to yield meaningful results about the world? I have yet to see an impact evaluation that achieves this feat, apportioning individual impact estimates in an epistemically trustworthy manner that doesn’t simply gloss over complex dynamics because they are too hard to conceptualise (let alone measure).

  6. ^

    If you found the previous few sentences confusing and/or somewhat incomplete, let me tell you: I have already advanced miles from the muddled and incoherent state of mind that I started with around a week ago (when I began reflecting on this essay). I can see how my thinking and writing on this point does not yet approach a fully developed argument for or against a specific conceptual framework. But it seems to me (and maybe I’m just trapped in my own brain and unable to imagine what my text will sound like to others) that I have at least reached a stage where my words are able to gesture at a comprehensible concern, or where they may even be able to inspire alternative ways of looking at things.

  7. ^

    I very intentionally write “reminiscent of.” By no means do I mean to suggest that I fully understand, let alone pretend to reproduce or even elaborate on, Kantian ethics.

  8. ^

    Neutral for the world as a whole; I think Graeber argues convincingly that working a bullshit job is usually quite harmful to the worker herself.

  9. ^

    Not at all sure how many people in the wider EA community would partially or fully agree with the points I make in this essay. I’d be curious to hear more on that through comments or private messages in response to this post. That said, in spite of some cautionary and sympathetic statements by people at the core of the EA community (Todd 2023, Karnofsky 2022), it seems clear that my post is in tension with the claim that there are enormous differences between interventions to make the world better and that individuals should seek to maximise for the highest-impact options they can achieve - a claim that is at the core of most official self-descriptions of effective altruists and core effective altruist organisations (e.g.: Centre for Effective Altruism, see Prioritization bullet point).

Comments (89)

I think it might be helpful to look at a simple case, one of the best cases for the claim that your altruistic options differ in expected impact by orders of magnitude, and see if we agree there? Consider two people, both in "the probably neutral role of someone working a 'bullshit job'". Both donate a portion of their income to GiveWell's top charities: one $100k/y and the other $1k/y. Would you agree that the altruistic impact of the first is, ex-ante, 100x that of the second?

This is a good question. I think, if we assume everything else equal (neither got the money by causing harm, both were influenced by roughly the same number of actors to be able and willing to donate their money), then I agree that the altruistic impact of the first is 100x that of the second.

I am not entirely sure what that implies for my own thinking on the topic. On the face of it, it clearly contradicts the conclusion in my Empirical problem section. But it does so without, as far as I can tell, addressing the subpoints I mention in that section. Does that mean the subpoints are not relevant to the empirical claim I make? They seem relevant to me, and that seems clear in examples other than the one you presented. I'm confused, and I imagine I'll need at least a few more days to figure out how the example you gave changes my thinking.

Update: I am currently working on a Dialogue post with JWS to discuss their responses to the essay above and my reflections since publishing it. I imagine/hope that this will help streamline my thinking on some of the issues raised in comments (as well as some of the uncertainties I had while writing the essay). For that reason, I'll hold of... (read more)

I'm from a middle-income country, so when I first seriously engaged with EA, I remember how the fact that my order-of-magnitude lower earnings vs HIC folks proportionately reduced my giving impact made me feel really sad and left out. 

It's also why the original title of your post – the post itself is fantastic; I resonate with a lot of the points you bring up – didn't quite land with me, so I appreciate the title change and your consideration in thinking through Jeff's example.

Sarah Weiler
New Update (as of 2024-03-27): This comment, with its very clear example to get to the bottom of our disagreement, has been extremely helpful in pushing me to reconsider some of the claims I make in the post. I have somewhat updated my views over the last few days (see the section on "the empirical problem" in the Appendix I added today), and this comment has been influential in helping me do that. Gave it a Delta for that reason; thanks Jeff!

While I now more explicitly acknowledge and agree that, when measured in terms of counterfactual impact, some actions can have hundreds of times more impact than others, I retain a sense of unease when adopting this framing: When evaluating impact differently (e.g. through Shapley-value-like attribution of "shares of impact", or through a collective rationality mindset (see comments here and here for what I mean by collective rationality mindset)), it seems less clear that the larger donor is 100x more impactful than the smaller donor.

One way for reasoning about this would be something like: Probably - necessarily? - the person donating $100,000 had more preceding actions leading up to the situation where she is able and willing to donate that much money and there will probably - necessarily? - be more subsequent actions needed to make the money count, to ensure that it has positive consequences. There will then be many more actors and actions between which the impact of the $100,000 donation will have to be apportioned; it is not clear whether the larger donor will appear vastly more impactful when considered from this different perspective/measurement strategy...

You can shake your head and claim - rightly, I believe - that this is irrelevant for deciding whether donating $100,000 or donating $1,000 is better. Yes, for my decision as an individual, calculating the possible impact of my actions by assessing the likely counterfactual consequences resulting directly from the action will sometimes be the most sensible thing
Vasco Grilo🔸
Interesting question, Jeff! I personally think that donating more in that case would be more impactful, but the answer is not totally clear to me:

  • If one believes boosting economic growth is a better proxy for contributing to a better world than increasing human welfare, I think saving lives in high income countries may be better than in low income countries. Therefore donating less can potentially be better to increase economic growth by keeping more resources in higher income countries.
  • It is not obvious that saving lives is net positive accounting for effects on animals.

I find myself sympathetic to a lot of what you write, while being in disagreement with some of your top-level conclusions (in some cases as-written; in some cases more a disagreement with the vibe of what's being said).

To elaborate:

  • I think that you're primarily pointing at a bunch of problems that can come from people inhabiting the "power-law distribution" perspective on impact and pursuing the tails
  • I think that these are real and important problems, and I think that they are sometimes underappreciated in EA circles, and sometimes things would be better if people less inhabited this mentality
  • Structurally, these problems give us some reason to reduce emphasis on the claim, but they don't (by themselves) cast doubt on the claim
  • You have one argument casting doubt on the claim (what you call the "empirical problem" of the difficulty of impact attribution)
    • I basically agree that this is an issue which muddies the waters, and somewhat levels the distribution of impact compared to a more naive analysis
    • However, I think that after you sort through this kind of consideration you would be able to recover some version of the power law claim basically intact
  • To my mind the stronger reason for sc
... (read more)
Sarah Weiler
Thanks for that thoughtful comment!

  • Agree that the adverse effects that I dedicate a large part of the post to do not speak to the question of whether impact actually follows a power-law distribution. They are just arguments against thinking about impact in that way. I think I acknowledge that repeatedly in the post, but can see now that the title makes it sound like I focus mainly on the "Empirical problem".
  • "I think that after you sort through this kind of consideration you would be able to recover some version of the power law claim basically intact" - I wonder if our disagreement on that is traceable and resolvable, or whether it stems from some pretty fundamental intuitions which it's hard to argue about sensibly?
  • ex ante vs. ex post: Interesting that you raise that! I've talked to a few people about the ideas in the essay, and I think something like your argument here was the most common response. I think I remain more persuaded by the claim that impact is not power law distributed at all, even ex post and not just because we don't have the means to predict ex ante. But I agree that the case for a power law distribution is harder to defend ex ante (because of all the uncertainty) than ex post, and my confidence in doubting the claim is stronger for ex ante impact evaluations than it is for ex post evaluations.
  • True and good point that I basically ignored the benefits of power-law thinking. I'll consider whether I think my thoughts on these benefits can fit somewhere in the essay, and will update it accordingly if I find an appropriate fit. Thanks for pointing this out!
  • Your conclusion sounds largely agreeable to me (though I imagine we would disagree when asked to specify how large the "tail-upsides" are that people should look for in a cautious manner).
Owen Cotton-Barratt
I'm definitely a little surprised to hear that you don't think that impact is power-law distributed at all, even ex post.

I wonder if it's worth trying to get numerical about this, rather than talk qualitatively about "whether impact is power-law distributed". Because really it's the quantitative ratios that matter rather than the exact nature of the distribution out in the tails (e.g. I doubt the essential disagreement here is about whether it's a power law vs a lognormal).

If you restrict to people who are broadly trying to do good with their work (at least a little bit), I'd be interested if you would offer guesses about the ratios (ex post) in impact comparing someone at the 90th centile to e.g. someone at the 50th centile; someone at the 99th centile; someone at the 99.99th centile. (I think it's kind of hard to produce numbers for these things because of course there's massive amounts of uncertainty, but my guess is that these four points would be spread out by somewhere between 2 and 4 orders of magnitude.)

And how much spread do we need to get here in order to justify a lot of attention going into looking for tail-upsides? Of course the exact amount of effort that's appropriate will vary with what you think of these tails, but if you think that some of your options might be twice as good (in expectation) as others, that's already enough to justify a lot of attention trying to make sure you find the good ones.

Notes on why I tend to expect something like a power law:

  • Some of my reason is looking at (what I understand of) the historical distribution of impact. It's certainly a bit flatter than a naive analysis would suggest after accounting for a bunch of the credit-sharing issues, selection effects in what we hear about, etc.; but I still think it will go like something along these lines.
  • Some of my reason is looking at distributions for some related things (like job productivity for jobs of various levels of complexity).
  • Some of my reason is h
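Owen's percentile framing can be made concrete with a toy simulation (a sketch added here purely for illustration, not part of the original thread): sample "impact" from a Pareto distribution and compare a few percentiles to the median. The tail exponent `alpha = 1.5` is an arbitrary assumption; the point is only that even a moderate power-law tail spreads these reference points across a few orders of magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tail exponent, chosen only for illustration;
# a smaller alpha means a heavier tail.
alpha = 1.5
# rng.pareto samples (x - 1) for a Pareto distribution with minimum 1.
impact = 1 + rng.pareto(alpha, size=1_000_000)

median = np.percentile(impact, 50)
for p in (90, 99, 99.99):
    ratio = np.percentile(impact, p) / median
    print(f"{p}th percentile vs median: {ratio:.1f}x")
```

With this (assumed) exponent, the 90th, 99th, and 99.99th percentiles come out at very roughly 3x, 14x, and a few hundred times the median, i.e. a spread in the 2-4 orders of magnitude range Owen guesses; a different exponent would flatten or steepen this considerably.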
Sarah Weiler
Appreciate the attempt to make headway on the disagreement!

I feel pretty lost when trying to quantify impact at these percentiles. Taking concerns about naive attribution of impact into consideration, I don't even really know where to start to try to come up with numbers here. I just notice that I have a strong intuition, backed up by something that seems to me like a plausible claim: given that myriad actors always contribute to any outcome, it is hard to imagine that there is one (or a very few) individual(s) that does all of the heavy lifting...

"And how much spread do we need to get here in order to justify a lot of attention going into looking for tail-upsides?" -- Also a good question. I think my answer would be: it depends on the situation and how much up- or downsides come along with looking for tail-upsides. If we're cautious about the possible adverse effects of impact maximizing mindsets, I agree that it's often sensible to look for tail-upsides even if they would "only" allow us to double impact. Then there are some situations/problems where I believe the collective rationality mindset, which looks for "how should I and my fellows behave in order to succeed as a community" rather than "how should I act now to maximize the impact I can have as a relatively direct/traceable outcome from my own action?"
Owen Cotton-Barratt
Re: I want to note that this property isn't a consequence of a power-law distribution. (It's true of some power laws but not others, depending on the exponent.) I think you're right about this in most cases (though in some domains like theoretical physics I think it's more plausible that most of the heavy lifting gets done by a few people).  But even if there aren't a small number of individuals doing all the heavy lifting, it can still be the case that some people are doing far more than others. For example think of income distribution: it definitely isn't the case that just a few people earn most of the money, but it definitely is the case that some people earn far more than others. If you were advising someone on how to make as much money as possible, you wouldn't tell them to chase after the possibility that they could be in the 0.0001%, but you would want them to have an awareness of the shape of the distribution, and some idea of how to find high-paying industries; and if you were advising a lot of people you'd probably want to talk about circumstances in which founding a company would make sense.

Perhaps we could promote the questions:

  • 'How can I help facilitate the most good?', or
  • 'How can I support the most good?'

and not the question:

  • 'How can I do the most good?'

Similar reframes might acknowledge that some efforts help facilitate large benefits, while also acknowledging all do-gooding efforts are ultimately co-dependent, not simply additive*? I like the aims of both of you, including here and here, to capture both insights.

(*I'm sceptical of the simplification that "some people are doing far more than others". Building on Owen's example, any impact of 'heavy lifting' theoretical physicists seems unavoidably co-dependent on people birthing and raising them, food and medical systems keeping them alive, research systems making their research doable/credible/usable, people not misusing their research to make atomic weapons, etc. This echoes the points made in the 'conceptual problem' part of the post)

For instance, most people will probably put the US president into the category “high-impact individual.” And there are certainly many impactful things the president can do that are not accessible to most other people. But for achieving most goals that can be said to really matter - a better healthcare system that actually improves wellbeing in the country; effective technology policy that actually reduces risks and/or advances life-improving technological developments; a sufficient response to the climate crisis; etc. - presidents themselves will tell you how incredibly constrained they are in bringing about these outcomes. The impact a president can have through sensible policies is determined by the actions of many other individuals (domestically and internationally), and it is also determined by the culture he or she operates in (the ideas that are considered normal, palatable, or even just conceivable). Yes, this individual can have an impact through their actions, but only in conjunction with the actions of many others. If we tried to account for all individuals that form part of the president’s enabling infrastructure (again, I will argue below that we probably can’t and shou

... (read more)
Owen Cotton-Barratt
FWIW my guess is that if you compare (lifetime impact of president):(lifetime impact of average member of congress), the ratio would be <100 (but >30).
Larks
I'm surprised you think that low, especially considering the President often will have been a Senator or Governor or top businessman before office, so the longer average term in Congress is not a big advantage. 
Owen Cotton-Barratt
I think I was leaning into making my guess sound surprising there, and I had in mind something closer to 100 than 30; it might have been better to represent it as "about 100" or ">50" or something. The fact that presidential terms are just 4 or 8 years does play into my thinking. For sure, they've typically done other meaningful stuff, but I don't think that typically has such a high impact ratio as their years as president. I generated my ratio by querying my brain for snap judgements about how big a deal it would seem to have [some numbers of presidents] [do a thing over their career] vs [some fraction of congress]. Anyway I could certainly be wrong here. I think it's possible I'm underestimating how big is the impact of having the mouthpiece of the presidency.

Thanks for writing this! "EA is too focused on individual impact" is a common critique, but most versions of it fall flat for me. This is a very clear, thorough case for it, probably the best version of the argument I've read.

I agree most strongly with the dangers of internalizing the "heavy-tailed impact" perspective in the wrong way, e.g. thinking "the top people have the most impact -> I'm not sure I'm one of the top people -> I won't have any meaningful impact -> I might as well give up." (To be clear, steps 2, 3, and 4 are all errors: if there's a decent chance you're one of the top, that's still potentially worth going for. And even if not--most people aren't--that doesn't mean your impact is negligible, and certainly doesn't mean doing nothing is better!)

I mostly disagree with the post though, for some of the same reasons as other commenters. The empirical case for heavy-tailed impact is persuasive to me, and while measuring impact reliably seems practically very hard / intractable in most cases, I don't think it's in principle impossible (e.g. counterfactual reasoning and Shapley values).

I'm also wary of arguments that have the form "even if X is true, believing / ... (read more)

Sarah Weiler
Thanks for your comment, very happy to hear that my post struck you as clear and thorough (I'm never sure how well I do on clarity in my philosophical writing, since I usually retain a bit of confusion and uncertainty even in my own mind).

I agree that many dangers of internalizing the "heavy-tailed impact" perspective in the wrong way are due to misguided inference, not a strictly necessary implication of the perspective itself. Not least thanks to input from several comments below, I am back to reconsidering my stance on the claims made in the essay around empirical reality and around appropriate conceptual frameworks. I have tangentially encountered Shapley values before but not yet really tried to understand the concept, so if you think they could be useful for the contents of this post, I'll try to find the time to read the article you linked; thanks for the input!

I share the wariness that you mention re "arguments that have the form "even if X is true, believing / saying it has bad consequences, so we shouldn't believe / say X."". At the same time, I don't think that these arguments are always completely groundless (at least the arguments around refraining from saying something; much more inclined to agree that we should never believe something just for the sake of supposed better consequences from believing it). I also tend to be more sympathetic to these arguments when X is very hard to know ("we don't really have means to tell whether X is true, and since believing in X might well have bad side-effects, we should not claim that X and we should maybe even make an effort to debunk the certainty with which others claim that X"). But yes, agree that wariness (though maybe not unconditional rejection) around arguments of this form is generally warranted, to avoid misguided dogmatism in the flawed attempt to prevent (supposed) information hazards.
Dawn Drescher
Shapley values are a great tool for divvying up attribution in a way that feels intuitively just, but I think for prioritization they are usually an unnecessary complication. In most cases you can only guess what they might be because you can't mentally simulate the counterfactual worlds reliably, and your set of collaborators contains billions of potentially relevant actors. But as EAs we can “just” choose whatever action will bring about the world history with the greatest value regardless of any impact attribution to ourselves or anyone.

I like the Shapley value and think it would make similar recommendations, but it adds another layer of infeasibility (and arbitrariness) on top of an already infeasibly complex optimization problem without adding any value. Then again many of us are strongly motivated by “number go up,” so Shapley values are probably helpful for self-motivation. :-3 (I think if EAs were more individualist, “the core” from cooperative game theory would be more popular than the Shapley value.)

Oh, and we get so caught up in the object-level here that we tend to fail to give praise for great posts: Great work writing this up! When I saw it, it reminded me of Brian Tomasik's important article on the same topic, and sure enough, you linked it right before the intro! I'm always delighted when someone does their research so well that whatever random spontaneous associations I (as a random reader) have are already cited in the article!
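For readers who (like the author) haven't worked with Shapley values before, a minimal sketch of the mechanics (a toy example added for illustration, not part of the original thread): each actor's share of the total value is their marginal contribution averaged over all orderings in which the actors could be added. The three-actor "unanimity" game below, where a donation only produces value if donor, charity staff, and researchers all play their part, is entirely hypothetical.

```python
from itertools import permutations

def shapley(players, v):
    """Average each player's marginal contribution over all orderings."""
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            totals[p] += v(frozenset(coalition)) - before
    return {p: t / len(orderings) for p, t in totals.items()}

# Hypothetical value function: 100 units of good materialise only
# when all three actors are involved; any partial coalition yields 0.
def v(coalition):
    return 100.0 if coalition == frozenset({"donor", "staff", "research"}) else 0.0

print(shapley(["donor", "staff", "research"], v))
```

In this game every actor is essential, so the 100 units split evenly (about 33.3 each) rather than being attributed wholly to any one actor - the intuition behind the "shares of impact" framing discussed in the thread.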
1
Sarah Weiler
From what I've learned about Shapley values so far, this seems to mirror my takeaway. I'm still giving myself another 2-3 days until I write up a more fleshed-out response to the commenters who recommended looking into Shapley values, but I might well end up just copying some version of the above; so thanks for formulating and putting it here already!

I do not understand this point but would like to (since the stance I developed in the original post went more in the direction of "EAs are too individualist"). If you find the time, could you explain or point to resources to explain what you mean by "the core from cooperative game theory" and how that links to (non-)individualist perspectives, and to impact modeling?

Very glad to read that, thank you for deciding to add that piece to your comment :)!
4
Dawn Drescher
Nice! To be sure, I want to put the emphasis on any kind of attribution being an unnecessary step in most cases, rather than on the infeasibility of computing it. There is complex cluelessness, nonlinearity from perturbations at perhaps even the molecular level, and a lot of moral uncertainty (because even though I think that evidential cooperation in large worlds can perhaps guide us toward solving ethics, that'll take enormous research efforts to actually make progress on), so infeasibility is already the bread and butter of EA. In the end we'll find a way to 80/20 it (or maybe -80/20 it, as you point out, and we'll never know) to not end up paralyzed. I've many times just run through mental “simulations” of what I think would've happened if any subset of people on my team had not been around, so this 80/20ing is also possible for Shapley values.

If you do retroactive public goods funding, it's important that the collaborators can, up front, trust that the rewards they'll receive will be allocated justly, so being able to pay them out in proportion to the Shapley value would be great. But as altruists, we're only concerned with rewards to the point where we don't have to worry about our own finances anymore. What we really care about is the impact, and for that it's not relevant to calculate any attribution.

I might be typical-minding EAs here (based on me and my friends), but my impression is that a lot of EAs are from lefty circles that are very optimistic about the ability of a whole civilization to cooperate and maximize some sort of well-being average. We've then just turned to neglectedness as our coordination mechanism rather than long, well-structured meetings, consensus voting, living together, and other such classic coordination tools. In theory (or with flexible resources, dominant assurance contracts, and impact markets) that should work fine. Resources pour into campaigns that are deemed relatively neglected until they are not, at which point the re
2
Alex Semendinger
I agree with just about everything in this comment :) (Also re: Shapley values -- I don't actually have strong takes on these and you shouldn't take this as a strong endorsement of them. I haven't engaged with them beyond reading the post I linked. But they're a way to get some handle on cases where many people contribute to an outcome, which addresses one of the points in your post.)
3
Sam_Coggins
I'm skeptical that Shapley values can practically help us much in addressing the 'conceptual problem' raised by the post. See the critique of estimated Shapley values in another comment on this post. Thanks for the considered and considerate discussion!

I don't see Shapley values mentioned anywhere in your post. I think you've made a mistake in attributing the values of things multiple people have worked on, and these would help you fix that mistake.

5
Sam_Coggins
Wouldn't estimating Shapley values still miss a core insight of the post - that 'do-gooding' efforts are ultimately co-dependent, not simply additive?

EXAMPLE: We can estimate the Shapley values for the relative contributions of different pieces of wood, matches, and newspaper to a fire. These estimated Shapley values might indicate that the biggest piece of wood contributed the most to the fire, but miss three critical details:

1. The contribution of matches and newspaper was 'small' but essential. This didn't come up in our estimated Shapley values because our dataset didn't include instances where there were no matches or no newspaper.
2. Kindling was also an essential contributor but was not included in our calculations.
3. The accessibility of fire inputs had their own interacting inputs, e.g. a trusting social and economic system that enabled us to access the fire inputs.

We also make the high-risk assumption that the fire would be used and experienced beneficially.

INTERPRETED IMPLICATION: estimated Shapley values still miss, at least in part, that outcomes from our efforts are co-dependent. We therefore still mislead ourselves by attempting to frame EA as an independent exercise? (I'm not confident in this and would be keen to take on critiques)
1
Alex Semendinger
Unless I'm misunderstanding, isn't this "just" an issue of computing Shapley values incorrectly? If kindling is important to the fire, it should be included in the calculation; if your modeling neglects to consider it, then the problem is with the modeling and not with the Shapley algorithm per se. Of course, I say "just" in quotes because actually computing real Shapley values that take everything into account is completely intractable. (I think this is your main point here, in which case I mostly agree. Shapley values will almost always be pretty made-up in the best of circumstances, so they should be taken lightly.)

I still find the concept of Shapley values useful in addressing this part of the OP: I read this as sort of conflating the claims that "impact can't be solely attributed to one person" and "impact can't be sensibly assigned to one person." Shapley values help with assigning values to individuals even when they're not solely responsible for outcomes, so it helps pull these apart conceptually.

Much more fuzzily, my experience of learning about Shapley values took me from thinking "impact attribution is basically impossible" (as in the quote above) to "huh, if you add a bit more complexity you can get something decent out." My takeaway is to be less easily convinced that problems of this type are fundamentally intractable.
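To make that concrete, here is a minimal sketch of an exact Shapley computation for a toy version of the fire example above. The characteristic function is made up for illustration: it assigns value 1 only when wood, matches, and newspaper are all present, so every input is essential.

```python
from itertools import combinations
from math import factorial

def shapley(players, v):
    """Exact Shapley values for characteristic function v over a list of players."""
    n = len(players)
    values = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                # Standard Shapley weight: |S|! * (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(frozenset(coalition) | {i}) - v(frozenset(coalition)))
        values[i] = total
    return values

# Toy "fire" game: the fire (value 1) only happens if wood, matches,
# AND newspaper are all present -- each input is essential.
def fire(coalition):
    return 1.0 if {"wood", "matches", "newspaper"} <= coalition else 0.0

print(shapley(["wood", "matches", "newspaper"], fire))
```

Because each of the three inputs is essential in this toy game, each receives an equal 1/3 share: the 'small' matches are credited the same as the big piece of wood, so long as they are actually included in the model. That supports the point that the failure mode lies in the modeling (leaving out kindling, or never observing a no-matches fire), not in the Shapley algorithm itself.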

I'm not sure about the factual/epistemic aspects of it, but there is at least some element here that seems somewhat accurate.

It has always struck me as a bit odd to glorify an individual for accomplishing X or donating Y, when they are only able to do that because of the support they have received from others. To be trivially simplistic: could I have done any of the so-called impressive things that I have done without support from a wide array of sources (stable childhood home, accessible public schools of decent quality, rule of law, guidance from mentors and friends, etc.)? Especially in the context of EA, in which so many of us are so incredibly privileged and fortunate (even if we are only comparing within our own countries): so many people in EA come from wealthy families[1], attended prestigious schools, and earn far more than the median income for their country.

I sometimes look at people that I view as successful within a particular scope and I wonder "if my parents could have afforded tutors for me would I have ended up more like him?" or "if someone had introduced me to [topic] at age 13 would I have ended up a computer engineer?" or "If my family had lived in and ... (read more)

I think there are several different activities that people call "impact attribution", and they differ in important ways that can lead to problems like the ones outlined in this post. For example:

  1. if I take action A instead of action B, then the world will be X better off,
  2. I morally "deserve credit" in the amount of X for the fact that I took action A instead of B.

I think the fact that any action relies enormously on context, and on other people's previous actions, and so on, is a strong challenge to the second point, but I'd argue it's the first point that should actually influence my decision-making. If other people have already done a lot of work towards a goal, but I have the opportunity to take an action that changes whether their work succeeds or fails, then for sure I shouldn't get moral credit for the entire project, but when asking questions like "should I take this action or some other?" or "what kinds of costs should I be willing to bear to ensure this happens?", I should be using the full difference between success and failure as my benchmark. (That said, if "failure" means "someone else has to take this action instead" rather than "it's as if none of the work was done", the benchmark should be comparing with that instead, so you need to ensure you are comparing the most realistic alternative scenarios you can.)

4
Owen Cotton-Barratt
I mostly-disagree with this on pragmatic grounds. I agree that that's the right approach to take on the first point if/when you have full information about what's going on. But in practice you essentially never have proper information on what everyone else's counterfactuals would look like according to different actions you could take.

If everyone thinks in terms of something like "approximate shares of moral credit", then this can help in coordinating to avoid situations where a lot of people work on a project because it seems worth it on marginal impact, but it would have been better if they'd all done something different.

Doing this properly might mean impact markets (where the "market" part works as a mechanism for distributing cognition, so that each market participant is responsible for thinking through their own alternative options, and feeding that information into the system via their willingness to do work for different amounts of pay), but I think that you can get some rough approximation to the benefits of impact markets without actual markets by having people do the things they would have done with markets -- and in this context, that means paying attention to the share of credit different parties would get.
2
Ben Millwood🔸
Is it at least fair to say that in situations where the other main actors aren't explicitly coordinating with you and aren't aware of your efforts (and, to an approximation, weren't expecting your efforts and won't react to them), you should be thinking more like I suggested?
2
Owen Cotton-Barratt
I think maybe yes? But I'm a bit worried that "won't react to them" is actually doing a lot of work. We could chat about more a concrete example that you think fits this description, if you like.
2
Charlie Harrison
Thank you for writing this piece, Sarah! I think the distinction stated above between (A) the counterfactual impact of an action or a person and (B) moral praiseworthiness is important. You might say that individual actions or lives have large differences in impact, but remain sceptical of the idea of (intrinsic) moral desert/merit – because individuals' actions are conditioned by prior causes. Your post reminded me a lot of Michael Sandel's book, The Tyranny of Merit. Sandel takes issue with the attitude of "winners" within contemporary meritocracy who see themselves as deserving of their success. This seems similar to your concerns about hubris amongst "high-impact individuals".

I wonder if the purpose for which we are assessing impact might be relevant here. As Joseph's comment implies, sometimes people rely on assessments of impact to "glorify" certain individuals. I think some of your critiques have particular force when someone is using impact to do something of that nature. The issues you describe cause many impact assessments to be biased significantly upward, and I think it is almost always better to err on the side of humility when heaping glory on high-status individuals. 

At the same time, there are a number of reasons I might be trying to assess impact for which your critiques seem less relevant. For instance, if I'm deciding what career to pursue, the idea that "the impact from an individual's action can also be attributed to those people who influenced the individual into taking the action in the first place" isn't really relevant to the decisionmaking process. Likewise, if I were trying to decide whether to spend resources influencing someone else's career decision, I know that the prior and current influence of the person's parents, teachers, peers, (possibly) religious community, etc. would play a huge role in the outcome. But I don't see why it would be wise to decide whether to spend those resources on that task only after re-allocating most of the impact from the possible better career choice to those other influences.

4
Brad West🔸
I was contemplating writing something similar... The question of whether a person is worthy of all the "praise credit" is different from the question of whether the valuable outcome is causally attributable to the agent.
3
Sarah Weiler
Thanks for reiterating the distinction, it seems quite helpful to the topic (on first consideration; I'll have to mull this over a bit more over the next few days to really understand how the distinction fits into and may shift my thinking)!

I partially (largely?) agree with your comments. It seems right that in specific decision-situations, it will often not be relevant to consider how prior influences account for (and take away from the individual impact of) my own actions or the actions of a person I'm trying to influence.

But I do think that it's useful to remain aware of the fact that our actions are so heavily influenced by others, and especially that our actions will in turn contribute to influencing the behaviour (and thoughts, and attitudes) of many other people. Remaining aware of that fact seems to push away from evaluating actions only on the basis of how much counterfactual impact one can expect from that one isolated action: the fact that all actions are always the result of many preceding actions, where (often) every single preceding action only contributes a small part to shaping the resulting action, makes conceivable the idea that actions with low direct counterfactual impact can still be quite important and justifiable when considered from a perspective of behavioural heuristics or collective rationality (both of which recognise that some hugely important outcomes can only ever be attained if many people decide to take actions that have low expected impact on their own).
2
Jason
I can't exactly put my finger on why I think this, but I suspect that EA impact analyses missing this sort of potential impact is -- as a practical matter -- relatively less important where only a small proportion of activity/funding in a cause area is EA-aligned than where a significant proportion is so aligned. If 95%+ of the funding/actors in a cause area are fairly attuned to theories of change like that described above, then it seems less likely that there are many stellar opportunities for that kind of impact that the remaining 5% are leaving on the table. In those circumstances, largely discounting this kind of impact may make sense in many cases. And from a global health perspective, trying to make decisions based on estimates of this kind of impact would pull EA global health away from its data-driven roots.
3
Sarah Weiler
I think I agree that the perspective I describe is far less relevant/valuable when 95% of actors hold and act in accordance with that perspective already. In those cases, it is relatively harmless to ignore the collective actions that would be required for the common good, because one can safely assume that the others (the 95%) will take care of those collective efforts by themselves. But when it comes to "the world's most pressing problems," I don't have the sense that we have those 95% of people to rely on to deal with the collective action problems. And I think that, even if the situation is such that 95% of other people take care of collective efforts, thus leaving room for 5% to choose actions unconstrained by responsibilities for those collective action needs, it remains useful and important to keep the collective rationality perspective in mind, to remember how much one relies on that large mass of people doing relatively mundane, but societally essential tasks.

I strongly sympathise with the concern of EA (or anyone) being pulled away from a drive to take action informed by robust data! I think especially for fields like Global Health (where we do have robust data for several, though not all, important questions), my response would be to insist that data-driven attempts to find particularly good actions as measured by their relatively direct, individual, counterfactual impact can, to some extent, coexist with and be complemented by a collective rationality perspective. The way I imagine decision-making when based on both perspectives is something like: an action can be worth taking either because it has an exceptionally large expected counterfactual impact (e.g., donations to AMF); or it can be worth taking because a certain collective problem will not be solved unless many people take that kind of action (e.g., donations to an org that somehow works to dismantle colonial-era stereotypes and negative self-images in a localised setting within a formerly colo
8
Jason
I had global health in mind -- the vast majority of the funding and people on board are not EAs or conducting EA-type analyses (although many are at least considering cost-effectiveness). Even in global health, I can see some circumstances in which EA's actions could be important for collective-action purposes. Tom Drake at CGD shared some insightful commentary about funding of global-health work last year that would require widespread cooperation from funders to execute. I noted my view in a response: that even if we agreed with the plan, there is no clear action for EA at this time, because the big fish (national governments and Gates) would need to tentatively commit to doing the same if a supermajority of funders made similar commitments.

The rest of this is going to be fairly abstract because we're looking at the 100,000 foot level. If I understand the collective-action issue you raise correctly, it reminds me a little of the "For want of a nail" proverb, in which the loss of a single nail results in the loss of an entire kingdom. It's a good theory of change in certain circumstances. The modern version might read: For want of a competently-designed ballot, the US invaded Iraq. Or a combination of very small vote-changing efforts (like a few local efforts to drive voters to the polls) could have changed history. It's at least possible to estimate the expected impact of switching 100 votes in a swing state in a given election, given what other actors are expected to do. It's not easy, and the error bars are considerable, but it can be done. Admittedly, that example has short feedback loops compared to many other problems.

Although determining which collective-action problems are worth committing time and energy to is difficult, I think there are some guideposts. First, is there a coherent and plausible theory of change behind the proposed efforts? For many different forms of activism, I sense that the efforts of many actors in the space are much too affected
3
Sarah Weiler
Quick point on this: I didn't mean to suggest that EAs constitute vastly more than 5% of people working on pressing problems. Completely agree that "the vast majority of the funding and people on board [in global health] are not EAs or conducting EA-type analyses", but I still think that relatively few of those people (EA or not) approach problems with a collective rationality mindset, which would mean asking themselves "how do I need to act if I want to be part of the collectively most rational solution?" rather than "how do I need to act if I want to maximise the (counterfactual) impact from my next action?" or, as maybe done by many non-EA people in global health, "how should I act given my intuitive motivations and the (funding) opportunities available to myself?". I think - based on anecdotal evidence and observation - that the first of these questions is not asked enough, inside EA and outside of it.

I think it's correct that some collective action problems can be addressed by individuals or small groups deciding to take action based on their counterfactual impact (and I thank you for the paper and proverb references; I found it helpful to read these related ideas expressed in different terms!). In practice, I think (and you seem to acknowledge) that estimating that counterfactual impact for interventions aimed at disrupting collective action problems (by convincing lots of other people to behave collectively rationally) is extremely hard, and I thus doubt whether counterfactual impact calculations are the best (most practicable) tool for deciding whether and when to take such actions (I think the rather unintuitive analysis by 80,000 Hours on voting demonstrates the impracticability of these considerations for everyday decisions relatively well). But I can see how this might sometimes be a tenable and useful way to go. I find your reflections on how to do this interesting (checking for a plausible theory of change; checking for the closeness of reaching a requi
3
Jason
I think that's right. But if I understand correctly, a collective rationality approach would commend thousands of actions to us, more than we can do even if we went 100% with that approach. So there seemingly has to be some way to triage candidate actions.

More broadly, I worry a lot about what might fill the vacuum if we significantly move away from the current guardrails created by cost-effectiveness analysis (at least in neartermism). I think it is awfully easy for factors like strength of emotional attachment to an issue, social prestige, ease of getting funding, and so forth to infect charitable efforts. Ideally, our theories about impact should be testable, such that we can tell when we misjudged an initiative as too promising and redirect our energies elsewhere. I worry that many initiatives suggested by a collective rationality approach are not "falsifiable" in that way; the converse is that it could also be hard to tell if we were underinvesting in them. So, at EA's current size/influence level, I may be willing to give up on the potential for working toward certain types of impact because I think maintaining the benefits of the guardrails is more important.

Incidentally, one drawback of longtermist cause areas in general for me is the paucity of feedback loops, often hazy theories of change, and so on. The sought-after ends for longtermism are so important (e.g., the continuation of humanity, avoidance of billions of deaths from nuclear war) that one can reasonably choose to overlook many methodological issues. But -- while remaining open to specific proposals -- I worry that many collective-rationality-influenced approaches might carry many of the methodological downsides of current longtermist cause areas while often not delivering potential benefits at the same order of magnitude as AI safety or nuclear safety. To the extent that we're talking about EAs not doing things that are commonly done (like taking the time to cast an intelligent vote), I am ad

Wow, Sarah, what a wonderful essay!

(don't feel obliged to read or reply to this long and convoluted comment, just sharing as I've been pondering this since our discussion)

As I said when we spoke, there are some ideas I don’t agree with, but here you have made a very clear and compelling case, which is highly valuable and thought-provoking. 

Let me first say that I agree with a lot of what you write, and my only objections to the parts I agree with would be that those who do not agree maybe do very simplistic analyses. For example, anyone who thinks tha... (read more)

4
Sarah Weiler
Thanks a lot for that comment, Dennis. You might not believe it (judging by your comment towards the end), but I did read the full thing and am glad you wrote it all up! Put in this way, I have very little to object to. Thanks for providing that summary of your takeaways; I think that will be quite helpful to me as I continue to puzzle out my updated beliefs in response to all the comments the essay has gotten so far (see statements of confusion here and here).

That's interesting. I think I hadn't really considered the possibility of putting really good teachers (and similar people-serving professions) into the super-high-impact category, and then my reaction was something like "If obviously essential and super important roles like teachers and nurses are not amongst the roles a given theory considers relevant and worth pursuing, then that's suspicious and gives me reason to doubt the theory." I now think that maybe I was premature in assuming that these roles would necessarily lie outside the super-high-impact category?

I think the sentiment behind those words is one that I wrongfully neglected in my post. For practical purposes, I think I agree that it can be useful and warranted to take seriously the possibility that some actions will have much higher counterfactual impact than others. I continue to believe that there are downsides or perils to the counterfactual perspective, and that it misses some relevant features of the world; but I can now also see more clearly that there are significant upsides to that same perspective and that it can often be a powerful tool for making the world better (if used in a nuanced way). Again, I haven't settled on a neat stance to bring my competing thoughts together here, but I feel like some of your comments above will get me closer to that goal of conceptual clarification - thanks for that!

Regarding the impact attribution point-

You simply need to try to evaluate the world that would have transpired if not for a specific agent(s) actions. In the case of your vaccine creation and distribution, let's take the individual or team that created the initial vaccine and the companies (and their employees) that manufacture and distribute the vaccines.

If the individual or team had not created the initial vaccine, it likely would have been discovered later. On the other hand, if the manufacturers and distributors did not go into that manufacturin... (read more)

I think counterfactual analysis as a guide to making decisions is sometimes (!) a useful approach (especially if it is done with appropriate epistemic humility in light of the empirical difficulties). 

But, tentatively, I don't think that it is a valid method for calculating the impact an individual has had (or can be expected to have, if you calculate ex ante). I struggle a bit to put my thinking on this into words, but here's an attempt: If I say "Alec [random individual] has saved 1,000 lives", I think what I mean is "1,000 people now live because of Alec alone". But if Alec was only able to save those lives with the help of twenty other people, and the 1,000 people would now be dead were it not for those twenty helpers, then it seems wrong to me to claim that the 1,000 survivors are alive only because of Alec - even if Alec played a vital role in the endeavour and if it would have been impossible to replace Alec with some other random individual. And just because any one of the twenty people was easily replaceable, I don't think that they all suddenly count for nothing/very little in the impact evaluation; the fact seems to remain that Alec would not have been able to have a... (read more)

5
Joseph Lemien
It seems that you are gesturing toward the supporting roles that enabled or allowed Alec to save those lives. I find it both true (in this hypothetical scenario) that those lives were saved because of Alec's choices, and also that Alec's choices are in turn dependent on other things. This seems to echo some aspects of the ideas of dependent origination. If we really want to give "credit," then maybe we would have to use something vaguely analogous to exponential smoothing: Alec gets 80% of the credit, the person before him gets 80% of the remaining credit, the person before that gets 80% of what then remains, etc. Also vaguely related, the book The Innovation Delusion has a section relating to this idea of giving credit and the idea of the enabling and supporting people that don't get credit for their contributions, describing it as a "cult of the inventor." Here is a small excerpt:
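A rough Python sketch of that geometric "credit decay" idea (this is one plausible reading of the scheme; the contributor names and the 80% rate are purely illustrative):

```python
def geometric_credit(contributors, rate=0.8):
    """Split one unit of credit along a chain of contributors:
    the most proximate contributor gets `rate` of the credit,
    the next gets `rate` of what remains, and so on; the last
    contributor absorbs the leftover so the shares sum to 1."""
    shares = {}
    remaining = 1.0
    for name in contributors[:-1]:
        shares[name] = rate * remaining
        remaining -= shares[name]
    shares[contributors[-1]] = remaining
    return shares

# Hypothetical chain: Alec, then the people whose support enabled him.
# Shares decay geometrically: 0.8, then 0.8 * 0.2, then 0.8 * 0.2**2, ...
print(geometric_credit(["Alec", "mentor", "funder", "teacher"]))
```

The scheme is intentionally crude compared to something like Shapley values, but it captures the intuition that enabling contributions further back in the chain receive smaller, non-zero shares of the credit.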
2
Brad West🔸
Yeah, I think the crux is that you want to weight counterfactual analysis less, and myself and EAs generally think this is the ultimate question (at least to the extent consequentialism is motivating our actions as opposed to non-consequentialist moral considerations). I think that the way to evaluate Alec's impact is to ask: if Alec had not taken action, would those thousand people be dead or would they be alive? (In this hypothetical, I'm assuming Alec is playing a founder role regarding a new intervention.)

Regarding the twenty other people, ask yourself if the same is true of them. If they are volunteering, would there have been others to volunteer, or would the project have been able to procure the funds to fund employees? If they are working for pay, was their work such that the project would not have been able to happen without them? Maybe it is the case that some or all of these people were truly indispensable to the project, such that a proper impact analysis would attribute much or even most of the impact to the twenty people other than Alec. On the other hand, it may be the case that Alec secured funding to pay these twenty other people, and if they had not taken the position, other competent people would have. In this situation, provided that there were not other sources of funding for Alec, I would say an impact analysis would attribute half of the lives saved to Alec and half to the funder.

I acknowledge that determining the counterfactual is hard (for instance, maybe the 20 workers freed up other actors to do other impactful work). But as the endpoint of analysis, I definitely think we should be trying to determine what the world looks like if we do X rather than if we did not do X, rather than if we do something that other people consider admirable or otherwise feels good.

EDIT: I realize you put "and those thousand people would not be saved but for the twenty others". If this is true, then the impact "credit" should definitely be spread among them. I thi

Thanks for writing this! I’ve long been suspicious of this idea but haven’t got round to investigating the claim itself, and my skepticism of it, fully, so I super appreciate you kicking off this discussion.

I also identify with ‘do I disagree with this empirically or am I just uneasy with the vibes/frame, and how do I tease those apart?'

For people who broadly agree with the idea that Sarah is critiquing: what do you think is the best defence of it, arguing from first principles and data as much as possible?

I have a couple of other queries/scepticisms about the... (read more)

1
OscarD🔸
On 2, I like this point about the distribution being shaped by the choices of others. I think it is quite true that if more people cared about impact, it would be a lot harder to counterfactually achieve very high-impact actions (because there would be so much 'competition' with other impact-seekers). Reminiscent of how financial markets are pretty efficient because so many people are seeking to make money trading -- I think if a similar number of people were looking to succeed in the 'impact market' there wouldn't be these super cost-effective low-hanging fruit left (lead elimination and the like). I think this then relates to point 1: if there was an efficient impact market, it would be quite surprising for impact to be heavy-tailed. But as long as most people are focused on things other than impact, I think my default assumption is it won't be too hard to find things that are a lot higher impact than the average. But I agree that this is not definitive, and in areas like longtermist interventions, where measurement is so hard, we don't have empirical evidence of this.

I agree with most of what you write and share similar analyses. Because I still think that there is a lot of value in the EA community, I currently keep supporting it and engaging in it. But I also see the imperative to bring in further perspectives into the community. This can be quite straining in my experience, so I kind of 'choose my battles' given my capacities to contribute to alleviating ideological biases in the EA community. 
So thanks for your post and for putting in the work to keep these discussions going as well!

1
Sarah Weiler
The conclusion/mindset and approach you describe resonate a fair bit with me, thanks for spelling them out and leaving them here as a comment!

the core message - some interventions are magnitudes more promising than others - was retained and even extended to other domains: from education, social programmes, and CO2 emissions reductions policies to efforts to change habits of meat consumption and voter turnout (Todd 2023)

Could you point me to the discussion of meat consumption in this source? I can't seem to find it. Thanks!

3
Sarah Weiler
Yikes, I linked to the wrong Todd article there, apologies! Meat consumption is mentioned in Todd 2021(2023): I'll add the source to that part of the essay, thanks for the alert!

I found the ideas in the post/comments clarifying and appreciate the considered, collaborative and humble spirit with which the post and most, if not all, comments were written. In alignment with the post's ideas, I hope this doesn't come across as over-attribution of impacts to individuals! I just appreciate the words people added here, the environment supporting them, and the people that caringly facilitated both.

This might be a bit cute but I reckon the 1970 song 'Strangers' by 'The Kinks' illustrates some of the points in the post/comments quite nicely (explained by the songwriter here)

Great post, Sarah! I strongly upvoted it.

I agree that there is a massive difference between actions/strategies that are net-negative, neutral, and net-positive; in other words, I do agree that it is really important to figure out whether an action or strategy actually contributes to solving a problem at all, and whether it may cause unintended negative effects.

I think the possibility of quite harmful outcomes will tend to be associated with that of quite beneficial outcomes, so the tails will partially cancel out, which contributes towards mitigating impac...

3
Sarah Weiler
Thanks for the comment! Just to make sure I understand correctly: the tails would partially cancel out in expected impact estimates because many actions with potentially high positive impact could also have potentially high negative impact if any of our assumptions are wrong? Or were you gesturing at something else? (Please feel free to simply point me to the post you shared if the answer is continued therein; I haven't had the chance to read it carefully yet)
2
Vasco Grilo🔸
Yes. For example, cost-effectiveness analyses of global health and development interventions assume that saving lives is good, but this may not be so due to effects on animals. A lower cost to save a life will be associated not only with generating more nearterm human welfare per $ (right tail; good), but also with generating more nearterm animal suffering per $ (left tail; bad), since the people who were saved would likely consume factory-farmed animals (see meat eater problem).

Wow great essay Sarah, very thought-provoking and relevant I thought.

I have lots of things to say, I will split them into separate comments in case you want to reply to specific parts (but feel free to reply to none of it, especially given I see you have a dialogue coming soon). Or we can just discuss it all on our next call :) But I thought I would write them down while I remember.

3
OscarD🔸
Re the polio vaccine, I don't know much about it, but I think the inventors probably do deserve a lot of credit! Yes, lots and lots of people were needed to manufacture and distribute many vaccine doses, but I think the counterfactual is illustrative: the workers driving the trucks and going door to door and so forth seem very replaceable to me, and it is hard to imagine a great vaccine being invented but then not being rolled out because no-one is willing to take a job as a truck driver distributing the doses. Whereas if the inventors didn't invent it, maybe it would be years or decades before someone else did.

But I can think of a case where inventors should get far less credit: if there is a huge prize for developing a vaccine, then quite likely lots of teams will try to do it, and if you are the winning team you might have only accelerated it by a few months. So in this case maybe the people who made/funded the prize get a lot of the credit.

I really like your inclusion of people who have influenced us in thinking about how to apportion credit. For me personally, my parents sometimes muse that despite all the great things they have done directly, parenting my brother and me well may be the single biggest 'impact' of their lives. Of course it is hard to guess, but this seems at least plausible, and I think parenting (and more broadly supporting/mentoring/caring for other people) is really valuable!
2
Sarah Weiler
[The thoughts expressed below are tentative and reveal lingering confusion in my own brain. I hope they are somewhat insightful anyways.]

Completely agree! The concept of counterfactual analysis seems super relevant to explaining how and why some of my takes in the original post differ from "the mainstream EA narrative on impact". I'm still trying to puzzle out exactly how my claims in "The empirical problem" link to the counterfactual analysis point - do I think that my claims are irrelevant to a counterfactual impact analysis? Do I, in other words, accept and agree that impact between actions/people differs by several magnitudes when calculated via counterfactual analysis methods? How can I best name, describe, illustrate, and maybe defend the alternative perspective on impact evaluations that seems to inform my thinking in the essay and in general? What role does and should counterfactual analysis play in my thinking alongside that alternative perspective?

To discuss with regard to the polio example: I see the rationale for claiming that the vaccine inventors are somehow more pivotal because they are less easily replaceable than all those people performing supportive and enabling actions. But just because an action is replaceable doesn't mean it's unimportant. It is a fact that the vaccine discovery could not have happened and would not have had any positive consequences if the supporting & enabling actions had not been performed by somebody. I can't help myself, but this seems relevant and important when I think about the impact I as an individual can have; on some level, it seems true to say that as an individual, living in a world where everything is embedded in society, I cannot have any meaningful impact on my own; all effects I can bring about will be brought about by myself and many other people; if only I acted, no meaningful effects could possibly occur. Should all of this really just be ignored when thinking about impact evaluations and my personal d
2
OscarD🔸
I think this is a good framing! And I think I am happy to bite this bullet and say that for the purposes of deciding what to do it matters relatively little whether my action being effective relies on systems of humans acting predictably (like polio vaccine deliverers getting paid to do their job) or natural forces (atmospheric physics for a climate geoengineering intervention). Whereas regarding what is a virtuous attitude to have, yes probably it is good to foreground the many (sometimes small) contributions of other humans that help our actions have their desired impacts.
2
OscarD🔸
Finally, I really hope you do choose to stay at least somewhat involved in ~EA things, as you say having the added intellectual diversity is valuable I think. You are probably the sometimes-critic of EA conventions/dogmas whose views I am most moved by.
2
Sarah Weiler
Thanks a lot for taking the time to read the essay and write up those separate thoughts in response!! I'll get to the other comments over the next week or so, but for now: thank you for adding that last comment. Though I really (!) am grateful for all the critical and thought-provoking feedback from yourself and others in this comment thread, I can't deny that reading the appreciative and encouraging lines in that last response is also welcome (and will probably be one of the factors helping me to keep exercising a critical mind even if it feels exhausting/confusing at times) :D 
1
OscarD🔸
Good to hear! Yes, I imagine having 50+ comments, many of them questioning/pushing-back, could be a bit overwhelming. From my perspective, and I am guessing for others as well, it is fine and reasonable if you choose not to engage now or ever. Putting this essay out into the world has already been a useful contribution to the discourse, I think :)
2
OscarD🔸
I think elitism and inequality are real worries - I think it is lamentable but probably true that some people's lives will have far greater instrumental effects on the world than others. (But this doesn't change their intrinsic worth as an experiencer of emotions and haver of human connections.) So I agree that there is a danger of thinking too much of oneself as some sort of ubermensch do-gooder, but the question of to what extent impact varies by person or action is separate.
2
Sarah Weiler
I think that makes sense and is definitely a take that I feel respect (and gratitude/hope) for. Even after a week of reflecting on the empirical question - do some people have magnitudes higher impact than others? - and the conceptual question - which impact evaluation framework (counterfactual, Shapley value attribution, something else entirely) should we use to assess levels of impact? - I remain uncertain and confused about my own beliefs here (see more in my comment on the polio vaccine example above). So I'm not sure what my current response to your claim "[it's] probably true that some people's lives will have far greater instrumental effects on the world than others" is or should be.
2
OscarD🔸
(emphasis added) Perhaps this is a strawman of your position, but it sounds a bit like you want to split actions into basically three buckets: negative, approximately neutral, and significantly positive. This seems unhelpful to me, for several reasons:

* I think it is uncontroversial that at least on the negative side of the scale some actions are vastly worse than others, e.g. a mass murder or a military coup against a democratic leader, compared to more 'everyday' bads like being a grumpy boss.
* It feels pretty hard to know which actions are neutral, for many of the reasons you give: the world is complex and there are lots of flow-through effects and interactions.
* Identifying which positive actions are significantly so versus insignificantly so feels like it just loses a lot of information compared to a finer-grained scale.
1
Sarah Weiler
Agreed! I share the belief that there are huge differences in how bad an action can be and that there's some relevance in distinguishing between very bad and just slightly bad ones. I didn't think this was important to mention in my post, but if it came across as suggesting that we basically should only think in terms of three buckets, I clearly communicated poorly - I agree that this would be too crude.

Strongly agreed! I strongly share the worry that identifying neutral actions would be extremely hard in practice - it took me a while to settle on "bullshit jobs" as a representative example in the original post, and I'm still unsure whether it's a solid case of "neutral actions". But I think for me, this uncertainty reinforces the case for more research/thinking to identify actions with significantly positive outcomes vs actions that are basically neutral. I find myself believing that dividing actions into "significantly positive" vs "everything else" is epistemologically more tractable than dividing them into "the very best" vs "everything else". (I think I'd agree that there is a complementary quest - identifying very bad actions and roughly scoring them on how bad they would be - which is worthwhile pursuing alongside either of the two options mentioned in the last sentence; maybe I should've mentioned this in the post?)

I think I disagree mostly for epistemological reasons - I don't think we have much access to that information at a finer-grained scale; based on that, giving up on finding such information wouldn't be a great loss because there isn't much to lose in the first place. I think I might also disagree from a conceptual or strategic standpoint: my thinking on this - especially when it comes to catastrophic risks, maybe a bit less for global health & development / poverty - tends to be more about "what bundle of actions and organisations and people do we need for the world to improve towards a state that is more sustainable and exhibits higher wellbeing (/
1
OscarD🔸
Yes I think that makes sense. I think for me the area where I am most sympathetic to your collective rationality approach is voting, where as you noted elsewhere the 80K narrow consequentialist approach is pretty convoluted. Conversely, the Categorical Imperative / universalisability perspective is very clear that voting is good, and thinking in terms of larger groups and being part of something is perhaps helpful here. So yes, while I still generally prefer the counterfactual perspective, I am probably not fully settled there. I suppose in theory being part of a loose collective like EA focused on impact could mean that individual donation choices matter less, if my $X to org Y means someone else will notice Y is better funded and give to a similarly-impressive org Z. I think in practice there is enough heterogeneity in cause prioritization that this may not be that large an effect? Perhaps within e.g. global health it could work, though, where donating directly to any GiveWell top charity is similar to any other, as GiveWell might make up the difference.
1
OscarD🔸
An overarching thought, not responding to any particular quote from you: I think lots of people in the world (the vast majority, in fact!) don't really think about impartial altruistic impact, let alone maximising it. If this is right, I think it would be a priori not so surprising if there are lots of high-impact opportunities left on the table by most people, waiting for ~EAs to act on. Perhaps the clearest case here is something like shrimp or insect welfare. By some lights at least this is very high impact, but it makes sense it wasn't already being worked on, because primarily only people with an ~EA mindset would be interested in it.
2
Sarah Weiler
[The thoughts expressed below are tentative and reveal lingering confusion in my own brain. I hope they are somewhat insightful anyways.] This seems on-point and super sensible as a rough heuristic (not a strict proof) when looking at impact through a counterfactual analysis that focuses mostly on direct effects. But I don't know if and how it translates to different perspectives of assessing impact. If there never were high impact opportunities in the first place, because impact is dispersed across the many actions needed to bring about desired consequences, then it doesn't matter whether a lot or only a few people try to grab these opportunities from the table - because there would be nothing to grab in the first place.  Maybe the example helps to explain my thinking here (?): If we believe that shrimp/insect welfare can be improved significantly by targeted interventions that a small set of people push for and implement, then I think your case for it being a high impact opportunity is much more reasonable than if we believe that actual improvements in this area will require a large-scale effort by millions of people (researchers, advocates, implementers, etc). I think most desirable change in the world is closer to the latter category.*  *Kind of undermining myself: I do recognise that this depends on what we "take for granted" and I tentatively accept that there are many concrete decision situations where it makes sense to take more for granted than I am inclined to do (the infrastructure we use for basically everything, many of the implementing and supporting actions needed for an intervention to actually have positive effects, etc), in which case it might be possible to consider more possible positive changes in the world to fall closer to the former category (the former category ~ changes in the world that can be brought about by a small group of individuals).
1
OscarD🔸
Yes, I think this issue of how many people you need to get on board with the vision/goals to make some change happen is key (and perhaps a crux). I agree the number of people needed to implement a change might be huge (all the farm workers making changes for various animal welfare things) but think we probably don't need to get all of them to care a lot more about nonhumans to get the job done. So in my view often a small-ish set of people advocate for/research/fund/plan some big change, and then lots of people implement it because they are told to/paid to.
1
OscarD🔸
Footnote 5 predicted perfectly the sort of thing I was going to say in response. You probably know more economics than I do, but I feel like there are some models of how markets work that quite successfully predict macro behaviour of systems without knowing all the local individual factors? E.g. re your suggestion that nurses are a large fraction of the 'highest impact' career paths, I think we could run some decent calculations about the elasticity of the nursing labour market to find how many more nurses there will overall be if I decide to be a nurse in some particular place. Me being a nurse increases labour supply, marginally reducing wages in expectation, reducing the number of other people who choose to be nurses; this effect may be quite different in different professions, e.g. if there is a cap of X places in some government medical certification program and lots of people apply, as with medical school in India, then joining that profession may increase the total supply of doctors very little. So I suppose I am still more optimistic than you that we can make, in some cases, simple models that accurately capture some important features of the world.
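The nursing example can be sketched with the standard partial-equilibrium result: with linear supply and demand, one extra entrant raises equilibrium headcount by roughly e_d / (e_d + e_s), where e_d and e_s are the absolute demand and supply elasticities at the prevailing wage. A minimal sketch, with purely illustrative numbers (the elasticities below are made up, not estimates for any real labour market):

```python
def net_extra_workers(demand_elasticity, supply_elasticity):
    """Equilibrium headcount increase when one extra person enters a
    profession, assuming linear supply and demand curves (elasticities
    given as absolute values at the prevailing wage)."""
    return demand_elasticity / (demand_elasticity + supply_elasticity)

# Elastic supply: my entry nudges wages down and displaces some
# would-be entrants, so one extra nurse adds only 0.25 nurses net.
print(net_extra_workers(demand_elasticity=1.0, supply_elasticity=3.0))  # → 0.25

# A long queue of applicants for a fixed number of certification
# places can be loosely modelled as a very elastic effective supply:
# one more applicant then adds almost no practitioners net.
print(net_extra_workers(demand_elasticity=1.0, supply_elasticity=99.0))  # → 0.01
```

The capped-certification case (like the Indian medical school example) is really a binding quantity constraint rather than a wage-mediated response, but the limiting behaviour is the same: the net effect of one more entrant approaches zero.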
2
Sarah Weiler
You're right that you're more optimistic than me for this one. I don't think we have good models of that kind in economics (or: I haven't come across such models; I have tried to look for them a little bit but am far from knowing all modeling attempts that have ever been made, so I might have missed the good/empirically reliable ones). I do agree that "we can make, in some cases, simple models that accurately capture some important features of the world" - but my sense is that in the social sciences (/ whenever the object of interest is societal or human), the features we are able to capture accurately are only a (small) selection of the ones that are relevant for reasonably assessing something like "my expected impact from taking action X." And my sense is also that many (certainly not all!) people who like to use models to improve their thinking on the world over-rely on the information they gain from the model and forget that these other, model-external features also exist and are relevant for real-life decision-making.
1
OscarD🔸
Makes sense, I think I don't know enough to continue this line of reasoning that sensibly!

I gave this a downvote for the clickbait title, which from the outline doesn't seem to match the actual argument. Apologies if this seems unfair; titles like this are standard in journalism, but I hope this doesn't become standard in EA, as it might affect our epistemics. This is not a comment on the quality of the post itself.

5
Sarah Weiler
I appreciate the sentiment and agree that preventing clickbaity titles from becoming more common on the EA forum is a valid goal! I'd sincerely regret if my title does indeed fall into the "does not convey what the post is about" category. But as Jeff Kaufman already wrote, I'm not sure I understand in which sense the top-level claim is untrue to the main argument in the post. Is it because only part of the post is primarily about the empirical claim that impact does not differ massively between individuals?
4
Chris Leong
It's fine to mention other factors too, but the claim (at least from the outline) seems to be that "it's hard to tell" rather than "there are no large differences in impact". Happy to be corrected if I'm wrong.
5
Jeff Kaufman 🔸
The standard EA claim is that your decisions matter a lot because there are massive differences in impact between different altruistic options, ex ante. The core claim in this post, as I read it, is that this is not true because for there to be massive differences ex ante we would (a) need to understand the impact of choices much better and (b) we would need to be in a world where far fewer people contribute to any given advance.
2
Chris Leong
"Is that this is not true because for there to be massive differences ex ante we would (a) need to understand the impact of choices much better" - Sorry, that's a non-sequitur. The state of the world is different from our knowledge of it. The map is not the territory. "X is false" and "We don't know whether X is true or false" are different statements.
2
Owen Cotton-Barratt
(While I don't think that the argument in the post does enough to support the conclusion in the title,) I think this is a case where the map is the important thing: when making decisions, we have to use ex ante impact (which depends on a map; although you can talk about doing it with respect to a better map than you have now) rather than ex post (which would be the territory). This is central enough that I think it's natural to read claims about the distribution of impact as being about the ex ante distribution rather than the ex post one.
2
Chris Leong
I can see why this might seem like an annoying technicality. I still think it's important to be precise and rounding arguments off like this increases the chances that people talk past each other.
9
Sarah Weiler
Wasn't quite sure where best to respond in this thread, hope here makes decent sense.

I did actually seek to convey the claim that individuals do not differ massively in impact ex post (as well as ex ante, which I agree is the weaker and more easily defensible version of my claim). I was hoping to make that clear in this bullet point in the summary: "I claim that there are no massive differences in impact between individual interventions, individual organisations, and individual people, because impact is dispersed across [many actions]". So, I do want to claim that: if we tried to apportion the impact of these consequences across contributing actions ex post, then no one individual action is massively higher in impact than the average action (with the caveat that net-negative actions and neutral actions are excluded; we only look at actions that have some substantial positive impact).

That said, I can see how my chosen title may be flawed because a) it leaves out large parts of what the post is about (adverse effects, conceptual debate); and b) it is stronger than my actual claim (the more truthful title would then need to be something like "There are probably no massive differences in impact between individuals (excluding individuals who have a net-negative or no significant impact on the world)"). I am not sure if I agree that the current title is actively misleading and click-baity, but I take seriously the concern that it could be. I'll mull this over some more and might change the title if I conclude that it is indeed inappropriate.

[EDIT: Concluded that changing the title seems sensible and appropriate. I hope that the new title is better able to communicate fully what my post is about.]

I'm obviously not super happy about the downvote, but I appreciate that you left the comment to explain and push me to reconsider, so thank you for that.
2
Jeff Kaufman 🔸
I agree it would be better if the post explicitly compared the ex-ante and ex-post ways of looking at impact, but I don't think it's reasonable to expect the post to make this distinction in its title.
2
Chris Leong
I suppose at this stage it's probably best to just agree to disagree.
2
Jeff Kaufman 🔸
I guess, though judging by the votes on your "I gave this a downvote for the clickbait title" it seems to me that a lot of us think you're being unfair to the author.
3
Chris Leong
I'm perfectly fine with holding an opinion that goes against the consensus. Maybe I could have worded it a bit better though? Happy to listen to any feedback on this.
2
Owen Cotton-Barratt
Yeah, I'd often be happier with people being clearer about whether they mean ex ante or ex post. But I do think that when people are talking about "distribution of impact" it's more important to clarify if they mean ex post (since that's less often the useful meaning) than if they mean ex ante.
2
Chris Leong
Sorry, I misread the definition of ex ante. I agree that the post poses a challenge to the standard EA view. I don't see "There are no massive differences in impact between individuals" as an accurate characterization of the claim the argument is showing.  "There are no massive ex ante differences in impact between individuals" would be a reasonable title. Or perhaps "no massive identifiable differences"?
5
Jeff Kaufman 🔸
I think the title does match the argument? I understand the post is claiming that in as much as it is possible to evaluate the impact of individuals or decisions, as long as you restrict to ones with positive impact the differences are small, because good actions tend to have credit that is massively shared.
2
Chris Leong
"I understand the post is claiming that in as much as it is possible to evaluate the impact of individuals or decisions, as long as you restrict to ones with positive impact the differences are small, because good actions tend to have credit that is massively shared." - There's a distinction between challenges with evaluating differences in impact and whether those impacts exist. The other two arguments listed in the outline are "Does this encourage elitism?" and a pragmatic argument that individualized impact calculations are not the best path of action. None of these are the argument made in the title.