

The second bullet point featured in the website introduction to effective altruism is the ITN framework, which exists to prioritize problems. It does so by considering a problem's Importance (or Scale, S), as the number of people or quality-adjusted life years (QALYs) affected, multiplied by its Tractability, the potential that the problem can be addressed, and by its Neglectedness, a function of the number of people already working to address it (ITN framework, including Leverage). Tractability is sometimes also called Solvability, and non-neglectedness is sometimes called crowdedness.
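As a minimal sketch in Python (the specific numbers and the bare multiplication are illustrative assumptions of mine, not a canonical formula), the framework's multiplicative structure looks like this:

```python
def itn_score(importance: float, tractability: float, neglectedness: float) -> float:
    """Multiplicative ITN score: a problem's priority rises with each factor.

    importance:    e.g. number of people or QALYs affected
    tractability:  how readily the problem can be addressed per unit of effort
    neglectedness: a (decreasing) function of how many people already work on it
    """
    return importance * tractability * neglectedness

# Two hypothetical problems: a large but crowded one vs. a smaller neglected one.
print(itn_score(importance=1e6, tractability=0.1, neglectedness=0.01))  # 1000.0
print(itn_score(importance=1e5, tractability=0.1, neglectedness=0.5))   # 5000.0
```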

Several criticisms of the framework, and difficulties in interpreting it (1, 2, 3, 4), precede this forum post. The ITN framework can be interpreted, as in the final paragraph of (1), such that IT represents the potential that a problem can be addressed, while ITN captures the difference that any one individual, particularly the next individual, can make to that problem. How much impact can the next individual make, on average, by choosing to work on this problem? Why do I add "on average"? Because we are still ignoring the person's unique qualities, and instead abstractly consider an average person. Adding "personal fit" as another multiplicative factor would make it personal as well.

So "How much impact can the next individual make on this problem?" really asks for the marginal counterfactual impact: the amount of impact that this one individual adds to the total impact so far, and which would not happen otherwise. The ITN factor Neglectedness assumes that this marginal counterfactual impact strictly declines as more individuals join the endeavor of addressing the particular problem. If that is true, then a more neglected problem, ceteris paribus (i.e., not varying the factors I and T, or personal fit, simultaneously), indeed always yields more impact, because fewer individuals are already addressing it. This is, however, not always true, as the criticisms referenced above already point out.
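A minimal sketch of the assumption Neglectedness encodes (the logarithmic functional form and the constant are my illustrative choices, a common stand-in for diminishing returns): if total impact grows logarithmically in the number of contributors, the marginal counterfactual impact of the next person strictly declines.

```python
import math

def total_impact(n: int, k: float = 100.0) -> float:
    """Total impact after n contributors, under assumed logarithmic returns."""
    return k * math.log(1 + n)

def marginal_impact(n: int) -> float:
    """Counterfactual impact added by contributor number n."""
    return total_impact(n) - total_impact(n - 1)

for n in [1, 10, 100, 1000]:
    print(n, round(marginal_impact(n), 3))
# Marginal impact falls as n grows -- the Neglectedness assumption.
```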

Consider the following string of examples.

Suppose a partial civilizational collapse has occurred, and you consider whether it would be good to go and repopulate the now barren lands. The ITN framework says that as the first person to do so you make the biggest difference. However, alone you cannot procreate, at least not without far-reaching technological assistance. In fact, even a sizable group of people deciding to do so might still be ineffective, by not bringing in sufficient genetic diversity. This is captured by a well-known term in population biology: the critical, or minimum viable, population size (to persist).

Something similar operates, to a lesser extent, in the effectiveness of teams. For example, I once came across the advice not to join a company as its sole data scientist, because you would have no team to exchange ideas with. Working together, you become more effective and develop further. Advocating for policies is another important area where you need teams. Suppose there are multiple equally worthwhile causes to protest for, but by the logic of the ITN framework you always join the least populated protest. Then no critical mass is ever reached. Doesn't that seem absurd? See also (5), and the third image in (3), which depicts a one-time significant increase in marginal counterfactual impact, as with a critical vote that establishes a majority (this graph is also called an indicator function). Effective altruists might similarly often find themselves advocating for policies that are neglected and thus not well known to the recipient of such advocacy. As opposed to maximally spread-out policy advocacy for all the (equally worthwhile) neglected causes, a focused effort for a single worthwhile cause might be more effective at capturing a policymaker's attention and then actually bringing about a change.
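A sketch of this critical-mass case (the threshold of 50 contributors and the payoff are made-up numbers): below a minimum viable group size total impact stays at zero, so the marginal counterfactual impact is an indicator-like spike at the threshold rather than a declining curve.

```python
THRESHOLD = 50  # hypothetical minimum viable group size

def total_impact(n: int) -> float:
    """All-or-nothing impact: nothing happens until the group is large enough."""
    return 1000.0 if n >= THRESHOLD else 0.0

def marginal_impact(n: int) -> float:
    return total_impact(n) - total_impact(n - 1)

print(marginal_impact(49))  # 0.0    -- joining an undersized protest adds nothing
print(marginal_impact(50))  # 1000.0 -- the critical joiner makes all the difference
print(marginal_impact(51))  # 0.0
```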

In the spirit of the last example, the above-discussed shortcomings of the factor Neglectedness might have consequential implications, undercutting our effectiveness. One implication might be that in our effective altruism culture we are excessively open-minded and entrepreneurial, doomed to endlessly wander in search of the next big new (and thus utterly neglected) thing, as opposed to exploiting the valid opportunities available right in front of us. In the well-known explore-exploit tradeoff there is a proper place for both, and overemphasizing Neglectedness might lead one to calibrate that balance incorrectly. This is just one example of a potential implication; the "gloomy" takeaway below is another, and I will consider more in the future.

In closing 

I will now analyse one additional example in more depth and then propose a solution to the problem, which is, following Will MacAskill's What We Owe the Future, to replace "Neglectedness" with "Leverage", and ITN with ITL.

In my previous forum post I remarked toward the end that as more existential risk is mitigated, and the problem of existential risk thus becomes less neglected, the expected number of future lives increases. In other words: the less neglected risks to future lives are, the more numerous, and hence the more important, these future lives become. Their Importance or Scale, the first factor of the ITN framework, increases as Neglectedness decreases. So if the mitigation of existential risk remains just as hard, or does not become harder fast enough compared to the increase in Importance, the marginal counterfactual impact is actually strictly increasing with the number of people working on this problem, instead of decreasing. In reality, one of the two probably holds in one set of circumstances and the other in others: a mixed bag.
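A toy model of this dynamic (every number below is an illustrative assumption of mine): let Importance grow with the number of prior workers, because their mitigation raised the expected number of future lives, while Tractability stays constant.

```python
def importance(n: int) -> float:
    """Expected future lives, growing as earlier workers have mitigated more risk.

    Toy assumption: each prior worker cuts remaining existential risk slightly,
    raising the expected value of the future that the next worker protects.
    """
    base_lives = 1e9
    survival_gain_per_worker = 0.001  # assumed effect of each prior worker
    return base_lives * (1 + survival_gain_per_worker) ** n

def marginal_impact(n: int, tractability: float = 1e-6) -> float:
    """Impact of worker n: constant tractability times a growing Importance."""
    return tractability * importance(n - 1)

for n in [1, 100, 1000, 2000]:
    print(n, round(marginal_impact(n), 2))
# Marginal impact *rises* with n here, the opposite of what Neglectedness assumes.
```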

The analysis of this example reveals an interesting dynamic and conclusion. We observed that past existential risk mitigation enables current existential risk mitigation of the same degree to be more impactful. Current mitigators can, so to speak, stand on the shoulders of giants, a phrase often used in the context of science. As such, to maximize impact, it can make sense to (wait and) join an endeavor when it is more developed, and thus less neglected. Again, see also (5). This directly contradicts the somewhat 'gloomy' takeaway Benjamin Todd gave Dutch author Pepijn Vloemans at EAG London, which became the closing remark of Vloemans' Dutch article: "if effective altruism grows and problems become less neglected, it will in the future be harder and harder to make a difference, but that is what progress looks like."

You have now almost reached the end of this post, and it is time to offer a solution to the problems I raised. In Appendix 3 of What We Owe the Future (pages 256-257), Will MacAskill introduces "Leverage" as an alternative word to "Neglectedness". From what I have gathered so far, he does not, however, advocate an all-out replacement of Neglectedness with Leverage there, nor does he in (1). I do, and I would like to do so here. This alternative third factor does not have the above-mentioned shortcomings of Neglectedness, and it can (be made to) refer to exactly what we want it to refer to: marginal counterfactual impact, thereby specifying the difference that one person can make (ITL) to a particular problem (itself valued by the product IT).
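To be explicit about what I mean (this is my reading of Leverage as marginal counterfactual impact, a sketch and not a quotation of MacAskill's formulation in the appendix):

```python
def leverage(total_impact, n: int) -> float:
    """Leverage of the n-th contributor: their marginal counterfactual impact,
    whatever shape the total-impact curve takes (declining, spiked, or rising).
    """
    return total_impact(n) - total_impact(n - 1)

def itl_score(importance: float, tractability: float, lev: float) -> float:
    """ITL: the problem's value (I*T) times the difference one person makes."""
    return importance * tractability * lev

print(itl_score(importance=1e6, tractability=0.1, lev=0.002))  # 200.0
```

Because Leverage is defined directly on the total-impact curve, it handles the critical-mass and increasing-returns cases above without any extra assumptions.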

The alternative of Leverage can, it seems to me, be presented just as easily, and it can prevent many of the misunderstandings (or false understandings) and erroneous calibrations that Neglectedness can cause. Having read this post, I hope you think so too, and will leverage Leverage, agree to do good, and neglect Neglectedness more. Perhaps Neglectedness, as a relic from the past, can still be made useful somewhere as well.

 

Acknowledgements

I thank the EAs with whom I have had a few informal conversations on this topic, as well as the authors of the previously linked materials that are adjacent in content.



Comments

Basically, the problem with Neglectedness is that it assumes strictly declining returns, as on a logarithmic scale. But if the problem has the quality "the more the merrier", i.e. increasing returns to scale, then Neglectedness itself becomes a problem. In other words, Leverage matters.

In addition, we might also want to use, and take into account, our ability to look ahead. Suppose, for example, a worthwhile task that requires two people to engage in it. The first person to engage gains zero marginal returns, while the second gets everything (all of the returns as marginal returns). The first person might, however, predict the second person's behavior and engage with the task based on the resulting expectation. By contrast, chimpanzees are not able to do this; you would never see two of them cooperate to, e.g., carry a log together (research by Joseph Henrich).
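A small sketch of that look-ahead (the payoff and probability below are made up): the first mover's expected marginal return is zero only if they ignore the chance that a second person joins.

```python
TASK_VALUE = 100.0  # assumed payoff, realized only once two people engage

def naive_marginal_return(already_engaged: int) -> float:
    """Backward-looking marginal return: zero for the first mover."""
    return TASK_VALUE if already_engaged == 1 else 0.0

def expected_marginal_return(p_second_joins: float) -> float:
    """First mover's look-ahead return, weighted by a predicted second joiner."""
    return p_second_joins * TASK_VALUE

print(naive_marginal_return(0))       # 0.0  -- no look-ahead
print(expected_marginal_return(0.8))  # 80.0 -- with a prediction of cooperation
```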

I was planning to write almost exactly the same post! I think the leverage formula is a great gem hidden in WWOTF's appendices.

Having said that, my understanding is that it still doesn't really capture S-curves, which seem to be at the root of your criticisms, and all the other criticisms, of neglectedness, and I would argue they apply almost everywhere. In addition to the examples you gave, global poverty interventions are only as high EV as they are because people did a bunch of work putting together the data that early GiveWell/GWWC researchers collated (and their collation was only valuable because they then found people to donate based on it). Technical AI safety research might eventually become high value, but my (loose) understanding is it's contributing very little, if anything, to current AI development. Marginal domestic animal welfare interventions seem to be pretty good, while marginal wild animal welfare interventions are still largely worthless. Climate change work might have reached diminishing marginal value, but even that still seems like a contested question, and that's about the 'least neglected' area anyone might consider an EA cause.

It seems to be very hard either to define a mathematical 'default' for an S-curve or to reason about one counterfactually (how should we counterfactually account for the value of early contributors to an S-curve that ultimately takes off?). But IMO, these are problems to be solved and tradeoffs to be made, not a reason to keep applying neglectedness as if there's no better alternative.
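To make that concrete, here's a minimal sketch with a logistic curve as an assumed stand-in for the S-curve (all parameters invented): marginal returns first rise, peak at the midpoint, then decline, so neither "most neglected" nor "least neglected" is automatically best.

```python
import math

def total_impact(n: float, cap: float = 1000.0, midpoint: float = 500.0,
                 steepness: float = 0.01) -> float:
    """Logistic (S-curve) total impact in the number of contributors n."""
    return cap / (1 + math.exp(-steepness * (n - midpoint)))

def marginal_impact(n: int) -> float:
    return total_impact(n) - total_impact(n - 1)

for n in [10, 250, 500, 750, 990]:
    print(n, round(marginal_impact(n), 3))
# Rises toward the midpoint, then falls: early contributors look "wasted"
# counterfactually, unless credited for enabling the steep middle section.
```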
