
With this post I wanted to ask a fairly basic question of the EA community that I've been scratching my head over.

Is Effective Altruism undervaluing the net impact of repairing traditional impact problem areas (e.g. global development) compared to focusing on new or unaddressed problem areas?

I think that this forum in general could use more imagery / graphics, and so I'll attempt to make my point with some graphs.

Consider first this graph, with 'Amount of Capital Distributed' on the Y-axis and 'Efficiency of Impact' on the X-axis:

This is how I imagine some might view the social sector, which is to say every single organization or cause addressing every single impact area, placed on a spectrum. At the beginning of the curve, down and to the left, we see that there is a smaller amount of capital circulating through approaches that aren't that effective. In the middle of the curve we see the bulk of approaches, with moderate impact and the most amount of capital at play. And finally to the right we start to see approaches that would fall under the banner of Effective Altruism. They wield less capital than traditional sources of impact, but are quite impactful in doing so.
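
To make the shape concrete, here is a minimal matplotlib sketch of the curve being described. Every number in it is invented purely for illustration; this is a conceptual picture, not data.

```python
# A hypothetical sketch of the 'Capital vs. Efficiency' curve described above.
# All values are illustrative -- this is not real data.
import numpy as np
import matplotlib.pyplot as plt

efficiency = np.linspace(0, 10, 200)  # 'Efficiency of Impact' (arbitrary units)
# Most capital flows through moderately effective approaches (the peak),
# with little at the ineffective far left or the bleeding-edge (EA) far right.
capital = np.exp(-((efficiency - 4.0) ** 2) / 4.0)

plt.plot(efficiency, capital)
plt.xlabel("Efficiency of Impact")
plt.ylabel("Amount of Capital Distributed")
plt.title("Hypothetical distribution of capital across the social sector")
plt.show()
```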

The logic behind the slope of this curve is that there is a certain Overton window of altruism. Approaches that are too regressive will start to leave the window and receive less capital. Approaches that are at the peak of society's attention will receive the most support. Those at the bleeding edge (EA) will only be perceptible by a small subset of the population and receive smaller levels of support.

Once this basic curve is established we can look at what we actually know about the impact landscape and start to refine the graph.

This next graph ditches the curve and instead introduces a bar chart. The same basic comparison of Capital vs. Impact still applies. The main difference is that approaches no longer exist on a spectrum; instead they are discrete.

This might seem like a minor discrepancy, but it reveals an important point about how causes are funded. If anything, Effective Altruism shows us that any action can have varying degrees of impact, in many different ways and across different categories. These relationships are incredibly messy. At the same time, capital, especially philanthropic capital, is rarely distributed in proportion to impact and agnostic of problem areas. In fact, the opposite is probably true. First, donors commonly pick a problem area and a set of organizations that they are personally swayed by, and then make isolated donations within this category with the hope that they can achieve impact. Even foundations such as the Rockefeller Foundation that are devoted to broad goals like "promoting the well-being of humanity throughout the world" have focus areas and pet issues that they like to fund more than others.

So ultimately, a better way to think about the relationship between impact and capital is probably not as a nice smooth curve, but as specific chunks of capital tied to cause or problem areas (even if in reality it doesn't quite work like this).

Furthermore, the key benefit of viewing altruistic capital markets as chunks instead of as a continuous impact curve is that you begin to see the orders of magnitude that separate different categories:

Here we see several categories of impact, charted by their annual expenditure levels and a loose ranking of their QALY/$ levels. Even without the ability to make accurate estimations of QALY/$, the difference in magnitude between these categories' expenditures should be clear. Even taking the most generous estimate of the annual expenditures of explicitly-EA causes (~$500M), we see that this is a drop in the bucket compared to the >$100 billion that just the UN System and the large NGO BRAC use each year.
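
To put a rough number on that gap, here's a back-of-the-envelope comparison using the post's own loose estimates (neither figure is precise):

```python
# Back-of-the-envelope scale comparison, using the rough figures cited above.
ea_annual = 500e6          # ~$500M/yr: generous estimate of explicitly-EA spending
mainstream_annual = 100e9  # >$100B/yr: the UN System plus BRAC alone

ratio = mainstream_annual / ea_annual
print(f"Mainstream aid outspends explicitly-EA causes by roughly {ratio:.0f}x")
# -> Mainstream aid outspends explicitly-EA causes by roughly 200x
```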

This brings me to my central point and question for the EA community. Is there an argument to be made for focusing more effort on retooling these large sources of capital towards EA-aligned positions?

I would imagine some objections to this argument might be:

- The whole idea of x-risks is that pouring even just a little attention and money into them can help mitigate catastrophic risks that would otherwise happen under business as usual. This is true even if there are more superficially pressing problems to deal with in the world like poverty.

- Focusing efforts on already-addressed problem areas doesn't immediately yield clear impact, and could actually prove a futile activity.

- EA-aligned organizations like Evidence Action and the various U.S. policy reform projects are in fact already addressing 'traditional' impact areas, but just the ones that have the highest upside potential.

I think all these points would be valid, but I want to raise some counterpoints that I think make the broad argument here still worthwhile to explore.

1. Even addressed problems can be addressed inefficiently

A common line of thinking when evaluating EA-friendly causes is to determine which causes have the least attention placed on them. Beyond the potential biases that come about when you go through the world looking for problems, I worry about this approach's emphasis on novelty.

It seems like there's not enough emphasis on the quality of funding and attention being placed on an issue, compared to the quantity of funding and attention.

For climate change, I think the EA justification for not spending time and resources on this problem makes sense. Even if the problem carries catastrophic consequences, there is quite a lot of fairly high-quality research and development being done here, from both for-profit and non-profit perspectives.

For global dev broadly speaking, and for sub-categories like global health, most of EA's engagement seems to be around a set of interventions with outstanding QALY/$ ratios, like affordable early-life healthcare. Beyond this, though, I get the impression that other sub-categories of aid are written off as not worthy of attention because they are already being addressed. This is understandable, as we see from the chart above that a large amount of capital goes towards humanitarian causes.

But despite the hundreds of billions of dollars that flow through aid each year, it's unclear how impactful this aid is. Obviously an argument can be made for the short-term effectiveness of providing services during truly acute humanitarian crises. But long-term critiques like those contained within Dambisa Moyo's Dead Aid argue that aid is fundamentally harmful. Moderate positions hold at least that there need to be better linkages between interventions and their long-term impact.

EAs have shown a slight interest, via orgs like Evidence Action, in trying to improve the effectiveness of traditional aid approaches, but I think this is a problem worthy of at least as much attention as reforming political institutions. If it is in fact the case that there are glaring inefficiencies in this sector, and that trillions of dollars are locked up pursuing this inefficient work, fixing these problems could prove to have massive upside. First and foremost, though, it seems imperative to at least get a better sense of how effective these grandfathered capital chunks are.
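
To illustrate the potential leverage here, consider a hypothetical calculation. The figures below are illustrative assumptions (the $100B/yr floor comes from the chart above; the 1% gain is not an estimate of actual tractability):

```python
# Hypothetical leverage calculation: if mainstream aid spending could be made
# marginally more effective, how does that compare to the entire explicitly-EA
# capital pool? All figures are illustrative assumptions.
mainstream_annual = 100e9  # conservative floor for annual aid flows, per the chart above
efficiency_gain = 0.01     # a hypothetical 1% improvement in effectiveness
ea_annual = 500e6          # generous estimate of explicitly-EA annual spending

gain = mainstream_annual * efficiency_gain
print(f"A 1% gain is worth ~${gain / 1e9:.1f}B/yr, "
      f"vs. an EA pool of ~${ea_annual / 1e9:.1f}B/yr")
# -> A 1% gain is worth ~$1.0B/yr, vs. an EA pool of ~$0.5B/yr
```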

2. There are numerous advantages to better integrating the EA community with the rest of the social sector

Another upside of working to improve causes that might otherwise be viewed as being already addressed is that it forces greater interaction between the EA community and the rest of the social sector.

Before learning about Effective Altruism I was working for a social enterprise that worked with household-name foundations on a variety of causes. Even at its relatively small stage of growth several years ago, I was surprised to see that such a robust community was forming around doing the most good possible. But what was most surprising about the EA community wasn't just how active it was; it was how separate it was from the world I was working in, despite having essentially the same goals.

Moreover, I was increasingly seeing a movement in Foundation World towards better frameworks for understanding and reporting on net impact. While EA takes this idea to an extreme, I didn't understand why this community needed to be so removed from the conversations (and access to capital) that were simultaneously happening in other parts of the social sector.

Besides avoiding the duplication of efforts, I think there are valuable lessons that the EA community and the other impact-chasers could learn from one another. For example, EAs are uniquely good at understanding the role of technology in the future, which is a notorious weakness of traditional social sector folks. On the other hand, I think social sector folks could teach EAs a thing or two about how programs work 'in the field' and what philanthropy looks like outside of the ivory tower that EAs can sometimes sit in.

Finally, I was reading somewhere on this forum recently a post about how EA is a set of beliefs and approaches, and shouldn't aspire to be a group or movement (I can't find the post now). I agree with this sentiment, but at this point Effective Altruism as a movement is a runaway train.

Part of embracing this reality means better understanding the role of optics, and how public perception affects EA's overarching goals. Maybe at the moment the EA philosophy isn't quite 'mainstream,' and maybe this monolithic status is a naive goal to reach. But speaking practically, the more people who operate under the banner of EA, the more good can be done in the world. This process entails not just attracting new members towards what EA stands for today, but also being more integrative with communities that wouldn't traditionally align themselves with EA. Wanting to do the most good possible is truly an agnostic trait. EA as a movement should be equally agnostic about not just what causes it considers, but what tribes it aligns itself with.

3. A vast amount of philanthropic capital in the world is and will always be distributed 'irrationally'; EA has much to gain by embracing and working around this

As discussed in #1 and #2, there is no shortage of problem areas that are being approached imperfectly, at least relative to the benchmarks of Effective Altruism. A large part of this, no doubt, is that global impact is not usually the product of the pure (rational, selfless) definition of altruism. Among other things, people donate to causes they personally feel attached to. There is a deep psychological (likely evolutionary) mechanism that underpins this, one that probably won't be changing any time soon.

In the eyes of EAs, these imperfect causes don't always seem to have tangible connections to impact, and as a result this community doesn't engage with them. This disengagement makes sense for some 'warm glow' forms of altruism that have structural barriers in place preventing them from ever becoming more efficient. But for other forms of impact, just because they are inefficient now doesn't mean they can't improve.

Engaging with these causes further (once again, a good example being global development) stands as a way not only to create impact, but to embrace the irrationality of giving and expand effective altruism into larger capital markets.

Conclusion

Even if they come across as not traditionally aligned with EA values, there are lots of problem areas, namely global development, that could benefit from an increase in analytical rigor.

Conversely, EA could benefit from tapping into these larger capital pools and potentially converting them into higher impact brackets.

Currently, Open Phil lists no focus areas in global health and development. It only recommends that individuals make donations to high-impact charities like those pursuing deworming and anti-malaria interventions.

I think that there is potential for a problem area to be loosely built around the meta-effectiveness of the development sector.

This isn't a novel concept, and there is already a nascent movement in this sector towards leaner operating strategies.

Engaging with this space could not only reveal further high-impact problems to work on, but also bring numerous strategic side benefits, such as helping to reframe narratives that EAs aren't interested in systemic change and that they exist in an elitist bubble.


Edit 1: Changed title of post from "Doing Repairs vs. Buying New" to "Benefits of EA engaging with mainstream (addressed) cause areas"

Note: This is my first post in the EA forum. I attempted to the best of my ability to research the points that were made here to make sure I wasn't saying anything too redundant. Apologies in advance if this is the case.

I'm interested in talking with people here more about formalizing this issue. I'm also looking for some volunteer projects. I have a background in design, marketing, strategy, and experience in the tech and philanthropy/foundation spaces. Please reach out if we can work with each other!

@bryanlehrer www.bryanlehrer.com



Comments (14)

I think we should be narrower about what concrete changes we discuss. You've mentioned "integration," "embracing and working around"... what does that really mean? Are you suggesting that we spend less money on the effective causes, and more money on the mainstream causes? That would be less effective (obviously), and I don't see how it's supported by your arguments here.

If you are referring to career choice (we might go into careers to shift funding around), I don't know if the large amount of funding on ineffective causes really changes the issue. If I can choose between managing $1M of global health spending or $1M of domestic health spending, there's no more debate to be had.

If you just mean that EAs should provide helpful statements and guidance to other efforts... this can be valuable, and we do it sometimes. First, we can provide explicit guidance, which gives people better answers but faces the problems of (i) learning about a whole new set of issues and (ii) navigating reputational risks. Some examples of this could be the Founders Pledge report on climate change, and the Candidate Scoring System. As you can see in both cases, it takes a substantial amount of effort to make respectable progress here.

However, we can also think about empowering other people to apply an EA toolkit within their own lanes. The Future Perfect media column in Vox is mostly an example of this, as they are looking at American politics with a mildly more EA point of view than is typical. I can also imagine articles along the lines of "how EA inspired me to think about X" where X is an ineffective cause area. I'm a big fan of spreading the latter kind of message.

Note: I think your argument is easy enough to communicate by merely pointing out the different quantities of funding in different sectors, and trying to model and graph everything in the beginning is unnecessary complexity.

Thanks for your comments, kbog!

The idea behind the post was not to advocate for spending more money on ineffective causes, at least not in the form of donations.

(Let's go with global dev as an example problem area.) I think providing guidance begins to paint the picture of what I'm advocating for. But something like a Vox newsletter isn't an adequate way to study the effectiveness of global dev. The real issue at hand is what the upside of formal organization around analyzing dev effectiveness could be, i.e. a Center for Election Science for development, or Open Phil announcing a dev focus area.

First and foremost, I think there is a high upside to simply studying what the current impact of the dev sector is. This was the idea behind bringing up the orders-of-magnitude difference between EA and dev earmarked capital. It's not about deciding where a new donation goes. Nor is it accurate to frame it as deciding between managing '$1M in domestic versus global health'. The reality is that there are trillions of dollars locked within dev programs that often have tenuous connections to impact. Making these programs just 1% more efficient could have massive impact potential relative to the small amount of preexisting capital EA has at play.

The broader point behind addressing these larger capital chunks, and working directly on improving the efficiency of mainstream problem areas, is that the Overton window model of altruism suggests that people will always donate to 'inefficient' charities. Instead of turning away from this and forming its own bubble, EA might stand to gain a lot by addressing mainstream behaviors more directly. Shifting the curve to the right instead of building up from scratch might be easier.

Thanks for writing this and making clear points. I think it helps the quality of discourse in these areas.

A couple potential downsides:

Mimesis means that integrating with traditional philanthropy makes us more likely to also adopt the same blind spots that prevent them from seeing order-of-magnitude improvements. Correlated strategies -> correlated results.

Involvement in existing niches usually means fighting with the organisms already exploiting that niche, even if you try to explain that you're helping them. See 'pulling the rope sideways'.

One writing point: I almost skipped this post due to the title. Maybe something more direct?

I've been compiling a list of lessons EA has that seem applicable across a wide array of cause areas/charities. It seems relevant to this post, so here's a link:

https://docs.google.com/document/d/18phuLs60GGlNIRh85D0y4YA3BXTLfy_1m0D2jj7SZDo/edit

(As of now it is still in its infancy, but I'm planning to continue working on it.)

"I was reading somewhere on this forum recently a post about how EA is a set of beliefs and approaches, and shouldn't aspire to be a group or movement."

I think of EA as a culture. IIUC there was a community called Overcoming Bias which became LessWrong, a community based on a roughly-agreed-upon set of axioms and approaches that led to a set of beliefs and a subculture; EA branched off from this [edit: no, not really, see replies] to form a closely related subculture, which "EA organizations" represent and foster.

It seems to be a "movement" because it is spreading messages and taking action, but I think its "movement-ness" is a natural consequence of its founding principles rather than a defining characteristic. Interestingly, I discovered EA/rationalism somewhat recently, but my beliefs fit into EA and rationalism like a hand in a glove. Personally, I am more attracted to being in an "EA culture" than an "EA movement" because I previously felt sort of like the only one of my kind - a lonely situation!

[Addendum:] I think this post is making a great point, that there is good to be done by, for example,

  • EAs learning practical lessons from other organizations
  • EAs promoting straightforward techniques for figuring out how to do good effectively
  • EAs making specific suggestions about ways to be more effective

But I also think that, if you want to do more than simply donate to effective charities - if you want to participate in EA culture and/or do "EA projects" - there is a lot to learn before you can do so effectively, especially if you aren't already oriented toward a scientific way of thinking. This learning takes some time and dedication. So it seems that we should expect a cultural divide between EAs (or at least the "core" EAs who use EA approaches/beliefs on a day-to-day basis) and other people (who might still be EAs in the sense of choosing to give to effective charities, but never immerse themselves in the culture.)

[P.S.] Since you mentioned optics, I wonder if this divide might be better framed not as a "cultural" divide, but an "educational" divide. We don't think of "people with science degrees" as being in a "different culture" than everyone else, and I'm basically saying that the difference between core EAs and altruistic non-EAs is a matter of education.

[On the other hand, in my mind, EA feels tied to rationalism because I learned both together - and rationalism is more than an ordinary education. (The rationalist in me points out that the two could be separated, though.) There are scientists who only act like scientists when they are in the lab, and follow a different culture and a different way of thinking elsewhere; more generally, people can compartmentalize their education so that it doesn't affect them outside a workplace. Rationalism as promoted by the likes of Yudkowsky explicitly frowns on this and encourages us to follow virtues of rationality throughout our lives. In this way rationalism is a lifestyle, not just an education, and thinking of EA the same way appeals to me.]

[anonymous]

A quick note on 'EA branched off from [LessWrong] to form a closely related subculture': this is a little inaccurate, to my knowledge. In my understanding, EA initially came together from three main connected but separate sources: GiveWell; Oxford philosophers like Toby Ord and Will MacAskill and other associated people like Rob Wiblin, Ben Todd, etc.; and LessWrong. I think these three sources all interacted with each other quite early on (pre-2010), but I don't think it's accurate to say that EA branched off from LessWrong.

Thanks. Today I saw somebody point to Peter Singer and Toby Ord as the origin of EA, so I Googled around. I found that the term itself was chosen by 80,000 Hours and GWWC in 2012.

In turn, GWWC was founded by Toby Ord and William MacAskill (both at Oxford), and 80,000 Hours was founded by William MacAskill and Benjamin Todd.

(Incidentally, though, Eliezer Yudkowsky had used "effective" as an adjective on "altruist" back in 2007, and someone called Anand had made an "EffectiveAltruism" page in 2003 on the SL4 wiki; note that Yudkowsky started SL4 and LessWrong, and with Robin Hanson et al. started Overcoming Bias.)

I thought surely there was some further connection between EA and LessWrong/rationalism (otherwise where did my belief come from?) so I looked further. This history of LessWrong page lists EA as a "prominent idea" to have grown out of LessWrong but offers no explanation or evidence. LessWrong doesn't seem to publish the join-date of its members but it seems to report that the earliest posts of "wdmacaskill" and "Benjamin_Todd" are "7y" ago (the "Load More" command has no effect beyond that date), while "Toby_Ord" goes back "10y" (so roughly 2009). From his messages I can see that Toby was also a member of Overcoming Bias. So Toby's thinking would have been influenced by LessWrong/Yudkowskian rationalism, while for the others the connection isn't clear.

[anonymous]

Thanks for the additional research. I can add a few more things:

'Carl Shulman' commented on the GiveWell blog on December 31, 2007, seemingly familiar with GiveWell and having a positive impression of it at the time. This is presumably Carl Shulman (EA forum user Carl_Shulman), longtime EA and member of the rationality community.

Robert Wiblin's earliest post on Overcoming Bias dates back to June 22, 2012.

The earliest post of LessWrong user 'jkaufman' (presumably longtime EA Jeff Kaufman) dates back to September 25, 2011.

There's some discussion of the history of EA as connected with different communities on this LessWrong comment thread. User 'thebestwecan' (addressed as 'Jacy' by another comment, so presumably Jacy Reese) stated that the term 'Effective Altruism' was used for several years in the Felicifia community before CEA adopted the term, but jkaufman's Google search could only find the term going back to 2012. This comment is also interesting:

'lukeprog (Luke Muehlhauser) objects to CEA's claim that EA grew primarily out of Giving What We Can at http://www.effectivealtruism.org/#comments :

This was a pretty surprising sentence. Weren’t LessWrong & GiveWell growing large, important parts of the community before GWWC existed? It wasn’t called “effective altruism” at the time, but it was largely the same ideas and people.'


So apparently Luke Muehlhauser, an important and well-connected member of the rationality community, believed that important parts of the EA community came from LW and GiveWell before GWWC existed. This seems to exclude the idea that EA grew primarily out of LW.

Overall it seems to me that my earlier summary of EA growing out of the connected communities of GiveWell, Oxford (GWWC, people like Toby Ord and Will MacAskill etc), and LessWrong is probably correct.

Interesting post. Thank you for writing it. Attractive graphs.

I wonder if there could be a kind of "trip advisor" type badge to recommend how well charities/interventions are doing in such a way as to encourage them to improve.

You mention it, but a key strength and issue is that EA is exclusive. It only wants to do the most good, so it only recommends the best charities, but it therefore doesn't encourage middling charities/interventions to be better.

There is a hard question here, which is: does EA want those charities to get better, or does it want them to end? Do we look down on individuals and organisations backing or using inefficient approaches? Have we become something akin to a purity cult? To do so might be unreasonable, since refusing to engage with successful middle-efficiency, highly-backed approaches could be a failure to improve them and do more good.

The real kicker then, I think, is: do you get more good per $ by increasing the high end or by shifting the graph to the right? Has anyone done any research on this? However, it seems relatively useful to not become sneery/superior towards middle-efficiency approaches, and it doesn't cost much (I think, though perhaps I'm wrong) to be gracious to those we think are doing some good but not as much as they could be.

Thanks for the reply, Nathan.

I think EA shouldn't want inefficient charities to end simply because it has no ability to actually make this happen. There will always be people who donate with pathos before logos, and this is something that I think EA could be better at knowing how to harness to its advantage.

Yeah, I think what I'm advocating for in this post is that you might be able to do more good per dollar by shifting the graph to the right (if this is in fact possible), because the graph is not actually a nice even bell curve but heavily bunched in the middle, if not to the left.

I'd like someone to research and plot the graph fully and do some tests. Let's see, I guess.

"I wonder if there could be a kind of "trip advisor" type badge to recommend how well charities/interventions are doing in such a way as to encourage them to improve."

Not quite the same, but you might be interested in https://sogive.org/

"Even addressed problems can be addressed inefficiently"

This is a good generalization to make from the climate change post last month. I argued in a comment that while climate action is well-funded as a category, I know of a specific intervention that seems important, tractable, and neglected. We can probably find similar niches in other well-funded areas.

"I was increasingly seeing a movement in Foundation World towards better frameworks for understanding and reporting on net impact. While EA takes this idea to an extreme, I didn't understand why this community needed to be so removed from the conversations (and access to capital) that were simultaneously happening in other parts of the social sector."

I suppose EA grew from a different group of people with a different mindset than traditional charities, so I wouldn't expect connections to exist between EA and other nonprofits (assuming this is a synonym for "the social sector") until people step forward to create them. Might we need liaisons focused full-time on bridging this gap?

"At the beginning of the curve, down and to the left, we see that there is a smaller amount of capital circulating through approaches that aren't that effective."

On the far left, interventions can have negative value.

Great post! I've had many thoughts in the same space for a while now.

Given how hard a time many EAs have had finding direct EA work, your post is particularly relevant. Direct EA work is the only place I've ever encountered where (to make up an example) an Oxford graduate who worked at Goldman Sachs is not met with immediate lucrative offers. It should be abundantly clear that looking elsewhere is a solid option. A charity like the Red Cross moves around an obscene amount of money. I don't think it's unreasonable to suggest someone could have an extremely large impact by attaining influence and power in such an organisation, applying EA principles to make the organisation more effective. Exposing effective charities to the masses through heavy pathos-driven advertising is another way to increase total impact.

There are many great ways to have a large impact in places that aren't directly related to EA. Take advantage of the unique opportunities you are presented with, and evaluate the options you haven't heard others discuss. With this much talent fighting so hard for the same few roles, the total increase in impact from having one more person spend their time applying, compared to going from zero to one EA-aligned person in an important role elsewhere, doesn't seem great to me.
