Berke

107 karma · Joined

Comments (11)
For the issues you raised in the last section, you may find this paper by Mogensen & MacAskill valuable. From the abstract: "Given plausible assumptions about the long-run impact of our everyday actions, we show that standard non-consequentialist constraints on doing harm entail that we should try to do as little as possible in our lives."

Agreed. Besides being further away, this would most probably reduce the number of EAs from LMICs who attend EA conferences. I'm from Turkey, and the few people from Turkey who have gone to an EAGx did so because there was travel funding (including my first two conferences); I'm quite confident none of them would have been able to go without it (I know them personally). I was expecting six people from our college group to come to the next EAGx in Europe, but if there is no travel funding, probably no one besides me will go (and I can only go because I'm on an EA fellowship!)

Still, I'm not saying all EAs from LMICs should be reimbursed, or that it always makes sense to fund people who couldn't otherwise come to conferences. But i) on the margin, providing travel grants to people from countries with a low EA presence may have a higher bang for the buck, and ii) a very selective travel-grant policy would have this consequence: it would effectively prevent a considerable number of EAs based in LMICs from participating in EAGs.

EA career advice tailored for people based in LMICs was urgently needed; very glad to see this!

People in countries with a low EA presence can be very well-positioned to have a lot of impact, even in the very short run: the number of low-hanging fruits (really neglected high-impact opportunities where even a single person can plausibly make a substantial difference) in most LMICs is considerably higher than in Western European and American countries. This post will probably empower a lot of people to have more impact. Thank you for writing this great post!

De-emphasizing cause neutrality would, my guess is, substantially reduce the long-term impact of the movement. Trying to answer the question "How do we do the most good?" without attempting to be neutral between causes we are passionate about and causes we don't (intuitively) care much about would bias us towards causes and paths that are interesting to us, rather than particularly impactful ones. Personal fit and being passionate about what you do are absolutely important, but when we're comparing causes and actions/careers in terms of impact (or ITN), our answers shouldn't depend on our personal interests and passions. When we're taking action based on those answers, then we should think about personal fit and passion, since these prevent us from being miserable while we pursue impact. Cause neutrality should also nudge people against associating EA with a singular cause like AI safety, global development, or even 80k careers. I think strong cause neutrality is a solution to the problem you describe, rather than its root.
De-emphasizing cause neutrality would increase the likelihood of EA becoming mainstream and popular, but it would also undermine our focus on impartiality and good epistemics, which were and are vital to why EA has been able to identify so many high-impact problems and tackle them effectively, imho.

An absolutely terrific post, thank you very much for writing this! 

Even when you are trying to advance equity, there will be certain charities that are more cost-effective and "efficient", efficient in the sense that they'll be successful. Again, if you want to do human-rights lobbying, doing that in the US would probably be more expensive than in a relatively globally irrelevant low-income country where there isn't much lobbying. Cost-effectiveness isn't the endpoint of EA; it's a method that enables you to choose the best intervention when you have scarce money.

On billionaire philanthropy: there are a lot of moral theories that don't share your assumptions about democracy, or that billionaires shouldn't make decisions about public goods. Most consequentialists don't assume a priori that billionaires should be less powerful; their stance would rest on empirical facts. Still, this part of your post also involves a moral assumption. Libertarian-ish moral views, prioritarianism, utilitarianism, and a view (not a full theory) called high-stakes instrumentalism are all quite popular, and we should integrate them into our normative-uncertainty model. You can check this blogpost on why some people aren't against billionaire philanthropy. I personally wouldn't want the state or the masses to prevent people from spending their money as they'd like, and many people from countries experiencing democratic backsliding, or with low trust in government, wouldn't agree with you either. In Turkey, for instance, it's really hard to get an abortion outside of private hospitals; universal healthcare for globally disadvantaged people means growing a state that's often corrupt and anti-liberal. I'm not saying your view is definitely wrong, but we should be less confident when we're talking about this issue.
Aiming higher in our altruistic goals doesn't remove the need for a theory of change, or for noticing the skulls. Many organizations are already trying to do what you want to do, advance equity, yet the world, and many of the places these charities operate, remain quite unequal; they haven't been very successful, and vaccines still have patents. What will you do differently this time?

Also, I think a probabilistic standpoint is useful when equity and health outcomes trade off. Imagine a parallel universe where universal healthcare results in slightly worse outcomes and slightly worse wellbeing overall, both for the average person and the well-off person, but is more equal: the variation in health outcomes between wealthy and poor people decreases, even though poor people's outcomes don't improve, and the equalization happens entirely through wealthy people's loss of welfare. Do you think effective altruists should still advance equity? That is a very specific conceptualization of the good. I'm not saying equity is unimportant, but other things may matter too; that's why taking normative and empirical uncertainty into account is really important when we're discussing these issues.

Cost-effectiveness doesn't mean only efficiency. When you're trying to do the most good, ditching cost-effectiveness is quite hard, because what will you use instead? Cost-effectiveness isn't only about efficiency or consequentialist perspectives; it's about doing the most good possible with the scarce money we have (as EAs). Don't you think it'd be better to think about how cost-effective human-rights lobbying is or would be before taking these actions? When you're deciding which programs to fund from a Rawlsian framework, what will you use if not cost-effectiveness? If two programs achieve the same thing and one costs 10k while the other costs 25k, you shouldn't donate to the latter.

Also, saving African children from malaria by distributing bednets, vaccinating Nigerian kids with certain incentives, or preventing humanity from destroying itself is not valuable only from a utilitarian point of view; the number of moral views that somehow imply "No, you shouldn't save an African kid for 4.5k, just buy a better car" probably isn't high. On the other hand, "Billionaire philanthropy isn't okay, it'd be better if the masses decided what to do" and "Universal healthcare is a moral imperative" are claims that a lot of moral theories would disagree with. So if you accept that we could well be mistaken about which moral theory is correct, the case for changing global discourse and setting up effective bureaucracies able to provide high-quality universal healthcare becomes quite hard to make.
A third critique is tractability. Isn't it quite hard to change global political discourse, especially in Africa, where most EAs have no connections, and to institute health as a global right and actually enforce it? This seems quite unlikely, because it would require increasing state capacity all over the global south, advancing technology in underdeveloped countries (if we take the veil of ignorance seriously), and setting up effective, capable health bureaucracies in countries where bureaucracies tend to have clientelistic and kleptocratic tendencies rather than being effective. Again, I don't think the goals this post proposes are actually tractable. This is different from distributing bednets.

What are we actually optimizing for? Are we optimizing for improving the health outcomes of the most disadvantaged people? If I were behind a veil of ignorance, I'd like to have a more functional FDA and better medical innovation overall; when there are tradeoffs between medical innovation and extending universal healthcare, what should we do? How would we know if we're making progress on these goals? By the number of states claiming that healthcare is a human right? I live in Turkey, where healthcare is universally provided by the state, but I can't get an appointment within three months at most hospitals, and the quality of care at state hospitals is quite low (at a level where four doctors misdiagnosed me, each with a different disease, failing spectacularly).

If you're curious about the arguments for billionaire philanthropy (or rather, against the arguments against it), you might check out this SSC post. This post by Richard Chappell is pretty good too.