PabloAMC 🔸

Quantum algorithm scientist @ Xanadu.ai

Bio


Hi there! I'm an EA from Madrid. I am currently finishing my Ph.D. in quantum algorithms and would like to focus my career on AI Safety. Send me a message if you think I can help :)

Comments

My donation strategy:

It seems that we have some great donation opportunities in at least some cases, such as AI Safety. This has made me wonder what donation strategies I prefer. Here are some thoughts, also influenced by Zvi Mowshowitz's writing on the topic:

  1. Attracting non-EA funding to EA causes: I prefer donating to opportunities that may attract external, non-EA funding to causes that EA deems relevant.
  2. Expanding EA funding and widening career paths: Similarly, where possible, fund opportunities that could increase the funds or skills available to the community in the future. For this reason, I am highly supportive of Ambitious Impact's project to create on-ramps for impactful earning-to-give careers, for instance. This is in contrast to incubating new charities (Charity Entrepreneurship), which is slightly harder to justify unless you have strong reasons to believe your impact would be more cost-effective than that of typical charities. I am a bit wary that the uncertainty might be too large to clearly distinguish between charities at the frontier.
  3. Fill the gap left by others: Aim to fund medium-sized charities in their 2nd to 5th years of life: they are no longer small and young enough to rely on Charity Entrepreneurship seed funding, but they are also not large enough to get funding from large funders. One could similarly argue that you should fund causes that non-EAs are less likely to fund (e.g. animal welfare), though I would find this argument stronger if non-EA funding were close to fully funding the other causes (e.g. global health), or if the support of the former (animal welfare) fully depended on the EA community.
  4. Value stability for people running charities: By default, and unless there are clearly better opportunities, keep donating to the same charities as before, and do so with unrestricted funds. This provides some stability for charities, which they very much welcome. Also, do not push too hard on the marginal cost-effectiveness of donations, because that creates some poor incentives.
  5. Favour hits-based strategies and local knowledge: Favour hits-based strategies, particularly those in which you profit from local knowledge of opportunities that may not be visible to others in the community.

One example of a charity I will support is ARMoR, which fits well with points 1 and 3. I am also excited about local-knowledge opportunities in the AI Safety ecosystem. Otherwise, I am particularly optimistic about the work of Apollo Research on evaluations and Redwood Research on AI control, as I believe these are key enablers of more robust AI governance.

I agree with most of this, except perhaps the framing of the following paragraph.

Sometimes that seems OK. Like, it seems reasonable to refrain from rescuing the large man in my status-quo-reversal of the Trolley Bridge case. (And to urge others to likewise refrain, for the sake of the five who would die if anyone acted to save the one.) So that makes me wonder if our disapproval of the present case reflects a kind of speciesism -- either our own, or the anticipated speciesism of a wider audience for whom this sort of reasoning would provide a PR problem?

In my opinion, the key difference is that here the bad outcome (e.g. animal suffering, but any other, really) may happen because of decisions taken by the people you are saving. So, in a sense, it is not an externally imposed mechanism. The key insight to me is that the children always have the chance to prevent the suffering that follows: people can reason and become convinced, as I was, that this suffering is important and should be prevented. Consequently, I feel strongly against letting innocent people die in these situations. So, overall, I do not think this has to do with speciesism or cause prioritisation.

Incidentally, this echoes many cultural themes in films and books: that people can change their minds, and that they should be given the chance to. Similarly, it is a common theme that you should not kill innocent people to prevent some bad thing from happening (think of Thanos and overpopulation, or Caiaphas arguing that Jesus should die to prevent greater wrongdoing…). Clearly these are not strong ethical arguments, but I think they contain a grain of truth; and one should probably have a very strong bias, at taboo level, against endorsing (as opposed to merely discussing) conclusions that justify letting innocent people die.

For what it's worth, I like the work of the Good Food Institute on pushing the science and the market for alternative proteins. They also do some policy work, though I fear their lobbying might have orders of magnitude less strength than the industry's.

Also, as far as I know, the Shrimp Welfare Project is directly buying and giving away the stunners (hopefully to establish some standard practice around them). So, counterfactually, it seems a reasonable bet, at least for the direct impact.

But I resonate with the broad concerns about corporate outreach and advocacy. I am particularly wary of bad-cop strategies. While I feel they may work, I can easily see how companies could set up public advertising campaigns about how their work is good for farmers and the community. I see them doing it all the time, and they are far better financed than charities.

Hey Vasco, in a constructive spirit, let me explain how I believe I can be a utilitarian (maybe hedonistic to some degree), value animals highly, and still not justify letting innocent children die, which I take as a sign of the limitations of consequentialism. Basically, you can stop consequence flows (or discount them very significantly) whenever they pass through other people's choices. People are free to make their own decisions. I am not sure if there is a name for this moral theory, but it is roughly what I subscribe to.

I do not think this is an ideal solution to the moral problem, but I think it is much better than advocating letting innocent children die because of what they may end up doing.

I donated the majority of my yearly donations to an AMF campaign I ran through Ayuda Efectiva for my wedding. The goal was to promote effective donations among my family and friends. I also donated a small amount to the EA Forum election because I think it is good, for democratic reasons, to allow the community to decide where to allocate some funds.

Hi @Jbentham,

Thanks for the answer. See https://forum.effectivealtruism.org/posts/K8GJWQDZ9xYBbypD4/pabloamc-s-quick-takes?commentId=XCtGWDyNANvHDMbPj for some of the points. Specifically, the problem I have with the post is not about cause prioritization or cost-effectiveness.

Arguing that people should not donate to global health doesn't even contradict common-sense morality because as we see from the world around us, common-sense morality holds that it's perfectly permissible to let hundreds or thousands of children die of preventable diseases.

I think I disagree with this. Instead, I think most people find it hard to act on what they believe because of social norms. But I think it would be hard to find a significant percentage of people who believe we should "let innocent children die because of what they could do".

Utilitarians and other consequentialists are the ones who hold "weird" views here, because we reject the act/omission distinction in the first place.

Probably you are somewhat right here, but I believe "letting innocent children die" is an even weirder opinion to hold.

Hi there,

Let me try to explain myself a bit.

For example, global health advocates could similarly argue that EA pits direct cash transfers against interventions like anti-malaria bednets, which is divisive and counterproductive, and that EA forum posts doing this will create a negative impression of EA on reporters and potential 10% pledgers.

There is a difference between what the post does and what you mention. The post is not saying that you should prioritize animal welfare over global health (which I would find quite reasonable and totally acceptable); I would find that useful and constructive. Instead, the post claims you should simply not donate the money if you are considering antimalarial nets. Or, in other words, that you should let children die because of the chicken they may eat.

Also, traditionally, criticism of "ends justifies the means" reasoning tends to object to arguments which encourage us to actively break deontological rules (like laws) to pursue some aggregate increase in utility, rather than arguments to prioritise one approach to improving utility over the other (which causes harm by omission rather than active harm), eg - prioritising animal welfare over global health, or vice-versa.

In fact, the deontological rule he is breaking seems clear to me: that innocent children should not be left to die because their statistical reference class says they will do something bad. And yes, they are still innocent. To me, any moral theory that dictates that innocent children should die is probably breaking apart at that point. Instead, he bites the bullet and assumes that the ends (preventing suffering) justify the means (letting innocent children die). I am sorry to say that I find that morally repugnant.

Also, let me say: I have no issue with discussing the implications of a given moral theory, even if they look terrible. But I think this should be a means to test and set limits on your moral theory, not a way to justify this sort of opinion. Let me re-emphasize that my quarrel has nothing to do with cause prioritization or cost-effectiveness. Instead, I have a strong sense that innocent children should not be left to die. If my moral theory disagrees with that strong ethical sense, it is the strong ethical sense that should guide the moral theory, and not the other way around.

As I commented above, it would not make any sense for someone caring about animals to kill people.

You only did so on the grounds that it is not an effective method, and because it would decrease support for animal welfare. Presumably, then, if you could press a button to kill many people without anyone attributing it to the animal welfare movement, you would?

Thanks MHR!

This is informative; I strongly upvoted. A few comments, though:

  1. I find it OK to entertain the question of the expected value of doing X or Y as a function of their consequences, be it for longtermism or animal welfare.

  2. I would find it very morally unappealing to refuse to save lives on the grounds of convicting people of actions they have not yet committed. E.g., if a child is drowning before you, I think it would be wrong to let her drown because she might cause animal suffering. A person can make her own decisions, and I would find it wrong to let her die because of what her statistical group does.
