I’ve written a series of posts where I discuss remuneration/compensation and demandingness in effective altruism. Here I briefly summarise that series and explain how the different posts fit together.

My key claim is that remuneration in effective altruism should, on average, be substantial. That’s mainly because of incentive effects, which I expect to outweigh the financial costs (partly thanks to the improved funding situation in effective altruism). In part, my series can be seen as a response to the recent articles on the EA Forum that express worries about greater effective altruist spending and increased remuneration.

I think effective altruism has already been moving in the direction of higher average remuneration for the last couple of years, and I suggest that development should continue. To an extent, I think it represents a convergence with standards and norms on the regular labour market. Effective altruists are sensitive to monetary incentives just like other people—maybe more so than is sometimes acknowledged.

I discuss several counter-arguments, i.e. arguments for lower remuneration. I am relatively critical of the argument that effective altruists should use willingness to work for low remuneration as a costly signal of value-alignment. By contrast, I'm a bit more ambivalent about reputational arguments.

Besides substantial average remuneration, I also argue for remuneration variance. Specifically, I argue that effective altruist funders should use monetary incentives to encourage people to take particularly impactful jobs. Given the likely large differences in impact between jobs, I expect that to be worth it.

I also discuss more general and conceptual issues in several posts. I argue that just as effective altruists should be neutral between different causes, so they should be neutral between the use of different resources (e.g. time vs money), as well as between different mindsets (e.g. a frugality mindset vs other mindsets). It seems to me that effective altruists aren’t always neutral in these senses, but that we sometimes cling on a bit nostalgically to the mindset and the approaches that the movement had at the start. Instead, I think we should be as open to changing our minds on these issues as we are regarding cause selection. I also show that to achieve resource neutrality, we can conceptualise use of our time in terms of its potential monetary value.

These are just some broad qualitative sketches, similar in style to the posts I respond to. I try to define the conceptual landscape and share some intuitions. Obviously this is much weaker evidence than hard data would be. I think it could be valuable if some effective altruist researchers (e.g. researchers with economics training) studied these issues in more detail. They could collect data on effective altruist remuneration and try to estimate the effects of different remuneration levels on impact. Since effective altruism is growing, it seems increasingly plausible to have researchers dedicated to studying such issues. It also seems to me that effective altruists have devoted far less time to these issues than to cause prioritisation. Just as we do data-driven research on the cost-effectiveness of different causes, so we should do data-driven research on the cost-effectiveness of different remuneration levels. Such research could potentially also help make discussions about remuneration levels less emotional and more detachedly focused on impact.

The posts in chronological order:

Resource neutrality and levels vs kinds of demandingness

Monetising time-donations

An argument against costly signalling in effective altruism

Deliberate altruism and costly signalling

The productivity benefits of substantial compensation in effective altruism

An analysis of reputational arguments for sacrificial behaviour in effective altruism

Mindset neutrality

The productivity benefits of compensation variance in effective altruism

Thanks to Pablo Stafforini, George Rosenfeld, and especially Ryan Carey and Daniel Eth for very helpful comments on this series of posts.

Comments

Thanks for writing this.

I feel much more important and valued at an EA position if I'm paid a high salary, and sad when I'm not, because the difference amounts to only 1-2% of my impact. So I'm glad to see someone write about the framing that I'm essentially donating 98-99%. Being underpaid would be especially annoying to the extent it prevents beneficial time-money tradeoffs.

Note I currently feel adequately paid.

I agree that there are some psychological and practical benefits to making and having more money, but I don't think you're "essentially donating 98–99%," since even if you create value 50–100 times your salary, there's no way for you to capture 50–100 times your salary, even if you were totally selfish. The fraction you're "essentially donating" is more like (max possible salary - actual salary) / max possible salary, where "max possible salary" is the amount you would earn if you were totally selfish.
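To make that concrete, here is a minimal sketch of the fraction just described; both salary figures are purely hypothetical.

```python
# A minimal sketch of the "essentially donating" fraction described above.
# Both salary figures are hypothetical, for illustration only.

def donated_fraction(actual_salary: float, max_possible_salary: float) -> float:
    """(max possible salary - actual salary) / max possible salary."""
    return (max_possible_salary - actual_salary) / max_possible_salary

# E.g. earning $80k when a totally selfish career could earn $500k:
print(donated_fraction(80_000, 500_000))  # 0.84, i.e. "essentially donating" 84%
```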

I agree that this is more accurate.

I think what I was going for is: say someone is day trading and making tens of millions of $/year. It would be pretty unreasonable to expect them to donate 98%, especially because time-money tradeoffs mean they can probably donate more if donating only 90%.
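As a rough sketch of that arithmetic (the figures below are invented), donating a smaller fraction only comes out ahead if keeping more money boosts earnings past a break-even point:

```python
# Hypothetical break-even arithmetic for the day-trading example above;
# the $10M figure is invented for illustration.

base_earnings = 10_000_000               # annual earnings at a 98% donation rate
donations_at_98 = 0.98 * base_earnings   # $9.8M donated per year

# Donating only 90% beats donating 98% iff boosted earnings E satisfy
# 0.90 * E > donations_at_98, i.e. E exceeds the break-even level below.
breakeven_earnings = donations_at_98 / 0.90
print(breakeven_earnings)  # ~10.89M: roughly a 9% earnings boost is enough
```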

This is not necessarily equivalent to a situation where someone is producing tens of millions of dollars of research value per year, but it's similar in a few respects:

  • Keeping all the value for themself isn't on the table for an altruist
  • Barring optics, taxes, etc. the impact calculation is similar
  • Pay provides incentives and a signal of value in both cases
  • Deviating from the optimum is deadweight loss

I don't think salary norms in these circumstances should be identical, but there's a sense in which having completely unrelated salary norms for each case bothers me. It's a wrong price signal, like a $1000 bottle of wine that tastes identical to $20 wine, or a Soviet supermarket filled with empty shelves due to price controls.

By that formula, I'm probably "essentially donating" only around 94%, though it does get closer to 99% if you count equity from possible startups.

Stefan, your most important argument seems to be that higher salaries will help with recruitment and motivation. But you don't address the concern that there's something a bit puzzling about the most competent effective altruists being motivated by making money for themselves.

If someone says "look, I'll do the work, and I will be excellent, but you have to pay me $150k a year or I walk", I would doubt they were that serious about helping other people. They'd sound more like your classic corporate lawyer than an effective effective altruist.


Adding to my other comment, there are several reasons I might choose a different job if I were paid <<$150k, even as someone who is basically dedicated to maximizing my impact.

  • My bargain with the EA machine lets selfish parts of me enjoy a comfortable lifestyle in exchange for giving EA work my all.
  • Salaries between EA orgs should be a signal of value in order to align incentives. If EA org A is paying less than org B, but I add more value at org A, this is a wrong incentive that could be fixed at little cost.
  • There are time-money tradeoffs like nice apartments and meal delivery that make my productivity substantially higher with more money.
  • Having financial security is really good for my mental health and ability to take risks; in the extreme case, poverty mindset is a huge hit to both.
  • Underpaying people might be a bad omen. The organization might be confusing sacrifice with impact, be constrained by external optics, be unable to make trades between resources, or have trouble getting funding because large funders don't think it's promising.
  • Being paid, say, 15% of what I could probably make in industry just feels insulting. This is not ideal, but pay is tied up with status in our society, and taking a pay cut especially so.
  • An organization that cuts my pay might be exhibiting distrust and expecting me to spend the money poorly; this is also a negative signal.

I'm not quite sure I understand the argument. One interpretation of it is that higher salaries don't actually have positive incentive effects on recruitment, motivation, etc. It would be good to have more data on that, but my sense is that they do have an effect. With respect to this argument, one needs to consider how high salaries are as a fraction of the monetised impact of the jobs in question. If that fraction is low, as Thomas Kwa suggests, then it could be worth increasing salaries substantially even if the effect on impact (in terms of percentages) is relatively modest.
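To illustrate that point with purely hypothetical numbers: when salary is a small fraction of a job's monetised impact, even a modest impact gain can outweigh a large raise.

```python
# Hypothetical illustration of the salary-as-fraction-of-impact point;
# every number below is made up.

monetised_impact = 1_000_000   # $/year of value the job produces
salary = 60_000                # current salary: 6% of the monetised impact
raised_salary = 120_000        # a 100% raise
productivity_gain = 0.10       # suppose the raise lifts impact by 10%

net_change = monetised_impact * productivity_gain - (raised_salary - salary)
print(net_change)  # +40000: a modest impact gain outweighs doubling the salary
```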

Another interpretation is that one needs to pay low salaries to filter for value-alignment. I discussed that argument critically in two of my posts.

Hmm, I don't think you've engaged with my point: there's something odd about very altruistically capable people requiring very high salaries, lest they choose to go and do non-impactful jobs instead. The charity sector famously has lower salaries because the work is more intrinsically rewarding than regular corporate fare.

The salaries might have an effect, but I don't think you've shown that in this case - the linked tweet is anecdata. A possibility is that higher salaries in one EA org just pull the better candidates to that org. So I want evidence showing that higher pay pulls in 'new' candidates.

I'm not sure the fraction-of-monetised-impact point is relevant. As someone who runs an org, I only have access to my budget, not the monetised impact - a job might have '£1m a year of impact', but that's, um, more than 4x HLI's budget. For someone with enormous resources, e.g. Open Phil, it might make more sense to think like this.

Of course, it might be that we just have different meanings of 'high', and I would have welcomed it if you'd offered an operationalisation in your discussion. I'm not sure I disagree with your conclusion; I just don't think you've proved your case.

The charity sector famously has lower salaries because the work is more intrinsically rewarding than regular corporate fare.

I thought it was because there's no profit to be made doing the work.

I was trying to understand your argument, and suggested two potential interpretations.

there's something odd about very altruistically capable people requiring very high salaries, lest they choose to go and do non-impactful jobs instead

They'd sound more like your classic corporate lawyer than an effective effective altruist

I don't understand where you're trying to go with these sorts of claims. I'm saying that I believe that compensation helps recruitment and the like, and therefore increases impact; and that I don't think that higher compensation harms value-alignment to the extent that's often claimed. How do the quoted claims relate to those arguments? And if you are trying to make some other argument, how does it influence impact?

I'm not sure the fraction-of-monetised-impact point is relevant. As someone who runs an org, I only have access to my budget, not the monetised impact - a job might have '£1m a year of impact', but that's, um, more than 4x HLI's budget. For someone with enormous resources, e.g. Open Phil, it might make more sense to think like this.

It's relevant because orgs' budgets aren't fixed. Funders should take the kind of reasoning I outline here into account when they decide how much to fund an org.

I've been very clear that I don't have non-anecdotal evidence, and called for more research in my post.

I liked this series, and agree with a lot of it. But (unless I missed this) I think you omitted an important problem of using low salaries as a proxy for value alignment: it is a much more meaningful proxy for some people than others. Low salaries might filter out people who aren’t value aligned, but they will also filter out people who are very value aligned but can’t accept low salaries because of e.g. high medical bills, having dependents to support, student loans, etc. This interferes with the goal of finding the best candidates, and exacerbates EA’s tendency toward elitism.

Thanks, good point. I agree that this is an additional problem for that strategy. My discussion of it wasn't very systematic.
