I'm a managing partner at AltX, an EA-aligned quantitative crypto hedge fund. I previously earned to give as a Quant Trading Analyst at DRW. In my free time, I enjoy reading, discussing moral philosophy, and dancing bachata and salsa.
My substack: https://arielsimnegar.substack.com/
So it's more important to convince someone who was previously giving to AMF to redirect that money to e.g. the EA Animal Welfare Fund than to convince a non-donor to give the same amount to AMF.
I've run into a similar dilemma before, where I'm trying to convince non-EAs to direct some of their charity to AMF rather than their favorite local charity. I believe animal welfare charities are orders of magnitude more cost-effective than AMF, so it's probably higher EV to try to convince them to direct that giving to e.g. THL rather than AMF. But that request is much less likely to succeed, and could also alienate them (because animal welfare is "weird") from making more effective donations in the future. I'm curious about your thoughts on the best way to approach that.
Thanks for your justification! Hamish McDoodles also believed that neuron count weighting would make the best human welfare charities better than the best animal welfare charities. However, after doing a BOTEC of cage-free campaign cost-effectiveness using neuron counts as a proxy, he changed his mind:
"ok, animal charities still come out an order of magnitude ahead of human charities given the cage-free campaigns analysis and neuron counts"
So unless you have further disagreements with his analysis, using neuron count weighting would probably mean you should support allocating the $100M to animal welfare rather than global health.
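To make the shape of that comparison concrete, here's a minimal sketch of such a neuron-count-weighted BOTEC in Python. All the input numbers are illustrative placeholders I've chosen for this example (not Hamish's actual figures); the point is only to show where the neuron-count discount enters and how, under these assumptions, the animal intervention still comes out roughly an order of magnitude ahead.

```python
# Minimal sketch of a neuron-count-weighted BOTEC comparing a cage-free
# campaign to a top global health charity. Every number below is an
# illustrative placeholder, not a figure from Hamish's actual analysis.

CHICKEN_NEURONS = 2.2e8   # ~220 million neurons in a chicken
HUMAN_NEURONS = 8.6e10    # ~86 billion neurons in a human
neuron_weight = CHICKEN_NEURONS / HUMAN_NEURONS  # welfare discount applied to chickens

# Cage-free campaign (placeholder assumptions)
hen_years_affected_per_dollar = 60   # hen-years shifted from cages to cage-free per $
welfare_gain_per_hen_year = 0.7      # fraction of a hen-year of suffering averted

# Global health charity (placeholder assumptions)
cost_per_life_saved = 5_000          # $ per life saved
life_years_per_life_saved = 50       # human life-years gained per life saved

# Cost-effectiveness in human-equivalent welfare years per dollar
animal_value_per_dollar = (
    hen_years_affected_per_dollar * welfare_gain_per_hen_year * neuron_weight
)
human_value_per_dollar = life_years_per_life_saved / cost_per_life_saved

print(f"Cage-free campaign: {animal_value_per_dollar:.4f} human-equivalent years per $")
print(f"Global health:      {human_value_per_dollar:.4f} human-equivalent years per $")
print(f"Ratio (animal / human): {animal_value_per_dollar / human_value_per_dollar:.1f}x")
```

With different placeholder inputs the ratio obviously changes; the comparison is only as good as the hen-years-per-dollar and welfare-gain estimates fed into it.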
Tangentially, this conversation illustrates how (if person-affecting views are false) the sign of Family Empowerment Media's (FEM's) effect is the opposite of AMF's and other life-saving charities'. FEM prevents human births while AMF saves lives, so they have opposite downstream effects on human lived experience, farmed animal welfare, and so on.
Therefore, I would not suggest anyone split their donations between life-preventing charities like FEM and life-saving charities like AMF, because their effects will offset each other. People who are sympathetic to FEM (as opposed to AMF) because of its farmed animal effects should probably just donate to animal welfare charities, which I would expect to help animals even more.
Your writings on this subject often emphasize an extremely high regard for the value of people making their own reproductive decisions, even when the stakes are (as in this case) a human's life weighed against an enormous amount of farmed animal suffering.
When would the other stakes be sufficiently large for you to endorse preventing someone from making their own reproductive decision?
For example, let's say Hitler's mother could have been forced to have an abortion, preventing Hitler's birth. Would you say that's a tradeoff worth making, with regret?
Or let's say we know Alice's son Bob, were he to be born, will save 1 billion lives by preventing a nuclear war, and Alice currently intends to abort Bob. Would you say forcing Alice to carry Bob to term would be a tradeoff worth making, with regret about the forced birth?
I ask because my intuition is that while reproductive autonomy is very important, there are always ways to up the stakes such that compromising on that principle, with regrets, can be the right thing to do. I feel like there's something I'm missing in my understanding of your view which has historically caused us to talk past each other.
Brian Tomasik has argued that if (a) wild animals have negative welfare on net, and (b) humans reduce wild animal populations, then the wild animal suffering humanity prevents may swamp even the horrific scale of factory farming.
I personally think the meat eater problem is very serious, and the best way around it is to just donate to effective animal welfare charities! Those donations would be orders of magnitude more cost-effective than the best human-centered alternatives.
I think some critiques of GVF/OP in this comments section could have been made more warmly and charitably.
The main funder of a movement's largest charitable foundation is spending hours seriously engaging with community members' critiques of this strategic update. For most movements, no such conversation would occur at all.
Some critics in the comments are practicing rationalist discussion norms (high decoupling & reasoning transparency) and wish OP's communications were more like that too. However, it seems there's a lot we don't know about what caused GVF/OP leadership to make this update. Dustin seems very concerned about GVF/OP's attack surface and conserving the bandwidth of their non-monetary resources. He's written at length about how he doesn't endorse rationalist-level decoupling as a rule of discourse. Given all of this, it's understandable that, from Dustin's perspective, he has good reasons for not being as legible as he could be. Dishonest outside actors could quote statements or frame actions far more uncharitably than anything we'd see on the EA Forum.
Dustin is doing the best he can to engage with the rest of the community while balancing explaining his reasoning against legibility constraints we don't know about. We should be grateful for that.
Thanks for the post, Vasco!
From reading your post, your main claim seems to be: The expected value of the long-term future is similar whether it's controlled by humans, unaligned AGI, or another Earth-originating intelligent species.
If that's a correct understanding, I'd be interested in a more rigorous justification of that claim. Some counterarguments:
You also dispute that we're living in a time of perils, though that doesn't seem so cruxy, since your main claim above should be enough for your argument to go through either way. Still, your justification is that "I should be a priori very sceptical about claims that the expected value of the future will be significantly determined over the next few decades". There's a lot of literature (The Precipice, The Most Important Century, etc.) arguing that we have enough evidence of this century's uniqueness to overcome this prior. I'd be curious about your take on that.
(Separately, I think you had more to write after the sentence "Their conclusions seem to mostly follow from:" in your post's final section?)
To your first point, it seems that animal welfare interventions which hold population size fixed, like humane slaughter, would be orders of magnitude better than global health interventions, even if the animals live net good lives. For another example, the Fish Welfare Initiative's interventions to improve fish lives may increase the number of farmed fish by increasing farms' capacity for higher stocking densities, so that charity could also seem exceptionally good by the logic of the larder.