Yellow
I would agree that impact calculations are only improved by considering concepts like counterfactual impact, marginal value, additionality, and so on.

I would caveat that when you're not doing calculations based on data, but rather reasoning more informally, it's important not to overweight this idea and assume that less competitive positions are probably better. It might easily be that the role with higher impact under a calculation naive to margins and counterfactuals remains the role with higher impact after adding those adjustments in, even if it is indeed the more competitive role.

I think for most people, when it comes to big cause-area differences like CEA vs AMF, their big-picture views about cause areas will likely dominate their considerations. Your estimate would have to fall within a very specific range before adjustments for counterfactual additionality on the margin would tip the scale, wouldn't it?
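
To make that concrete, here's a toy sketch (in Python, with entirely made-up numbers and a hypothetical `adjusted_impact` helper) of how a counterfactual discount can, but need not, flip a naive comparison:

```python
def adjusted_impact(naive_impact: float, replaceability: float) -> float:
    """Counterfactually adjusted impact.

    `replaceability` is the fraction of your naive impact that the
    next-best candidate would have produced anyway
    (0 = irreplaceable, 1 = fully replaceable).
    """
    return naive_impact * (1 - replaceability)

# Made-up numbers: the competitive role looks better naively...
competitive_role = adjusted_impact(naive_impact=100, replaceability=0.5)
less_competitive = adjusted_impact(naive_impact=60, replaceability=0.25)

print(competitive_role)   # 50.0, still ahead after the adjustment
print(less_competitive)   # 45.0
```

With these made-up discounts, the adjustment only flips the ranking when the competitive role's naive estimate falls between 60 and 90; outside that window, the naive winner stays the winner, which is the "specific range" point above.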

Answer by Yellow

The funny thing is (I don't have any inside info here; this is all pure speculation): I wouldn't be shocked if the AMF position ended up being the more competitive of the two, despite the lower salary.

Non-profit salaries normally tend to be lower than for-profit salaries, even when skills are equivalent, because people are altruistic and therefore willing to work for less if the work is prosocial, meaningful, and fits their interests. For example, the position of professor is more competitive and has a higher skill requirement than industrial R&D jobs in the same field, but the latter pays more.

I believe that in the EA community this effect is much more pronounced. Some (not all) EA orgs are likely in a somewhat unique position where, even if you offer 28-35k or less, you may still get applications from people with degrees from top schools who could be making double, triple, or quadruple that on the open market. At some point, when you notice that your top applicants are mostly motivated by impact and not money, you might become uncertain whether offering more money actually improves the applicant pool further.

In such an environment, salaries aren't set by market forces in the same way as at a normal job. Instead, they are set by organizational culture and decision-making. This is likely all the more true for remote roles, where the lack of geographic constraints makes expected pay among equally skilled candidates even more variable.

Some people see this situation and think "so let's spend less money and be more cost-effective, leaving more money for the beneficiaries and attracting more donations through our reduced overhead". They aren't in it for the money, and they figure all the truly committed candidates who make the best hires aren't either.

Other people see the situation and think "nevertheless, let's pay people competitive rates, even if we could get away with less", whether out of a sense of fair play (let's not underpay people doing such important work just because some people are altruistic enough to accept that; that's not nice), the golden rule (they themselves want a competitive salary and would feel weird offering an uncompetitive one), or because they figure they will get better candidates, more long-term career sustainability, and better personnel retention that way.

One of these perspectives is probably the correct one for a given use-case, but I'm not sure which, and reasonable people seem to diverge.

(Of course it's not just personal preference - some organizations, and some positions inside organizations, have more star power than others and so have this option more. And it's also a cause-area thing - some cause areas perceive themselves as more funding-bottlenecked, where every $2 in salary is one less mosquito net, while others aren't implicitly making that comparison as much: pouring extra money into their project wouldn't necessarily improve it, because the true bottleneck is something else.)
 

Even if one believes they can make more impact at AMF, they would have to give up 20k pounds in salary to pass on the content specialist role. We learned recently to consider earning less, but this may still be quite the conundrum. What do you think?

As far as the personal conundrum goes, I guess you have to ask yourself how much you value earning more, and consider whether you'd be willing to pay the difference to buy the greater impact you'd achieve by taking the position you believe is higher impact.
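
If it helps, one way to frame that arithmetic (a toy sketch; the salary gap comes from the question above, while the impact figure is an invented placeholder):

```python
# Toy framing of the salary-vs-impact trade-off.
salary_gap = 20_000   # pounds per year forgone by taking the AMF role
extra_impact = 4.0    # hypothetical: how many more "units of good" per year
                      # you believe the AMF role produces

price_per_unit = salary_gap / extra_impact  # 5,000 pounds per unit
print(f"Taking the AMF role is like paying {price_per_unit:,.0f} pounds "
      f"per extra unit of impact.")
```

If you'd happily pay that implied price to buy the same impact, the choice resolves itself; if not, that's informative too.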

This is a conflation of technical criticism (e.g. you critique a methodology or offer scientific evidence to the contrary) and office-politics criticism (e.g. you point out a conflict of interest or question a power dynamic).

Plant made a technical criticism, whereas office-politics disagreement is the kind that potentially carries social repercussions.

Besides, EA orgs aren't the only party that matters - the media reads this forum too, and I can see how someone might not want a workplace conflict to become their top Google result.

Answer by Yellow

How should we navigate this divide?

I generally think we should almost always prioritize honesty where honesty and tact genuinely trade off against each other. That said, I suspect the cases where the trade-off is genuine (as opposed to people using tact as a bad justification for a lack of honesty, or honesty as a bad justification for a lack of tact) are not that common.

Do you disagree with this framing? For example, do you think that the core divide is something else?

I think that a divide exists, but I disagree that it pertains to recent events. Is it possible that you're committing a typical-mind fallacy: because you don't find something very objectionable, you assume others probably don't find it very objectionable either and are only objecting for social-signaling reasons? Are you underestimating the degree to which people genuinely agree with what you're framing as the socially acceptable consensus views, rather than agreeing with those views only for social capital?

To be clear, I think there is always some degree to which people do things for social reasons, and that applies no less to recent events than it does everywhere else. But I don't think recent events are especially illustrative of these camps.

it appears to me, that those who prioritise AI Safety tend to fall into the first camp more often and those who prioritise global poverty tend to fall into the second camp.

I think this is false. If you look at every instance of an organization seemingly failing at full transparency for optics reasons, you won't find much of a trend towards global health organizations.

On the other hand, if you look at more positive instances (people who advocate concern for branding, marketing, and PR with transparent and good intentions), you still don't see any particular trend towards global health. (Some examples: [1][2][3], just random results pulled from a keyword search for words like "media", "marketing", etc.) Alternatively, you could consider the cause-area leanings of most "EA meta/outreach" type orgs in general, with respect to which cause area puts its energy where.

It's possible that people who prioritize global poverty are more strongly opposed to systemic injustices such as racism, in the same sense that people who prioritize animal welfare are more likely to be vegan. It does seem natural, doesn't it, that the type of person sufficiently motivated by that cause to make a career out of it might also be more strongly motivated to oppose racism? But that, again, is not a case of "prioritizing social capital over epistemics", any more than an animal activist's veganism is mere virtue-signaling. It's a case of genuine differences in worldview.

Basically, I think you've only arrived at the conclusion that global health people are more concerned with social capital because you implicitly frame opposition to the racist-sounding stuff specifically as a bid for social capital, while ignoring the big picture outside that one specific area.

Also, if you think people are wrong about that stuff and you'd like them to change their minds, you have to convince them of your viewpoint, rather than deciding that they only hold it because they're seeking social capital instead of holding it for honest reasons.

I think the central point is that animals carry moral weight and that we should act accordingly, not that there are no trade-offs to the health and pleasure of humans from abstaining from animal products. It's not as if, given a scientific consensus that the optimal diet at our current tech level includes meat, animal advocates would cease advocating abstention. Assigning animals a significant moral weight means that such very minor drawbacks to humans become a rounding error next to the major harms to animals.

Animal advocates who say that cutting out meat will not harm your health, or will improve it, aren't presenting an unbiased reading of the nutrition literature; the conclusion is motivated by not wanting to hurt animals. Research that validates or debunks this motivated conclusion may be useful to advocates insofar as it informs which vitamins and protein powders to recommend, but it wouldn't sway the central point.

I currently consider myself a leftist, and I honestly don't have a big "leftist critique of EA". Effective altruism seems uncomplicatedly good according to all the ideas I consider "leftist", and leftism similarly seems good according to all the ideas I consider EA.

Effective altruists as individuals aren't always radical leftists, of course, though they are pretty much all left of center. If you press me for criticisms of EA, I can think of harmful statements or actions by high-profile individuals to critique, though I don't know whether that would be useful to anyone involved. I can also say that the community as a whole doesn't escape the structural problems and interpersonal prejudices found in larger society: effective altruists and EA institutions aren't immune to racism, sexism, power struggles, corruption, or internal politics. But that's true of most people and organizations, including leftist ones, and EA is certainly no worse than larger society. There's nothing un-leftist about effective altruism, the ideology.

If the whole idea is that you're impartially treating everyone equally and doing the most you can to help them, then that's... almost tautologically good, from almost all reasonable political perspectives, leftist or otherwise? I think you really have to make some stronger, more specific claims that touch on a leftist angle if you want someone to refute them from a leftist angle.

Answer by Yellow

I would highly recommend that anyone interested in funding small things keep tabs on Charity Entrepreneurship, as well as every charity they incubate.

Hope you guys enjoy my little doodle! This is my submission to the creative writing contest. I know it's pretty rough-draft-ish and last-minute. If people seem to like the premise, I'll polish it up more!