Ariel Simnegar 🔸

2237 karma

Bio

I'm a managing partner at AltX, an EA-aligned quantitative crypto hedge fund. I previously earned to give as a Quant Trading Analyst at DRW. In my free time, I enjoy reading, discussing moral philosophy, and exploring Wikipedia rabbit holes.

My substack: https://arielsimnegar.substack.com/

Comments

I agree with that caveat! Though I suspect that the downstream effects of the population increase/decrease channel dominate, especially for animal welfare.

Tangentially, this conversation illustrates how, if person-affecting views are false, the sign of Family Empowerment Media (FEM) is the opposite of that of AMF and other life-saving charities. FEM prevents human births while AMF saves lives, so they have opposite downstream effects on human lived experience, farmed animal welfare, and so on.

Therefore, I would not suggest anyone split their donations between life-preventing charities like FEM and life-saving charities like AMF, because their effects will offset each other. People who are sympathetic to FEM (as opposed to AMF) because of its effects on farmed animals should probably just donate to animal welfare charities, which I would expect to help animals even more.

Your writings on this subject often emphasize an extremely high regard for the value of people making their own reproductive decisions, even when the stakes are (as in this case) a human's life versus an enormous amount of farmed animal suffering.

When would the other stakes be sufficiently large for you to endorse preventing someone from making their own reproductive decision?

For example, let's say Hitler's mother could have been forced to have an abortion, preventing Hitler's birth. Would you say that's a tradeoff worth making, with regret?

Or let's say we know Alice's son Bob, were he to be born, will save 1 billion lives by preventing a nuclear war, and Alice currently intends to abort Bob. Would you say forcing Alice to carry Bob to term would be a tradeoff worth making, with regret about the forced birth?

I ask because my intuition is that while reproductive autonomy is very important, there are always ways to raise the stakes until compromising on that principle, with regrets, becomes the right thing to do. I feel like there's something I'm missing in my understanding of your view, which has caused us historically to talk past each other.

Brian Tomasik has argued that if (a) wild animals have negative welfare on net, and (b) humans reduce wild animal populations, then the welfare effects of that reduction may swamp even the horrific scale of factory farming.

I personally think the meat eater problem is very serious, and the best way around it is to just donate to effective animal welfare charities! Those donations would be orders of magnitude more cost-effective than the best human-centered alternatives.

I think some critiques of GVF/OP in this comments section could have been made more warmly and charitably.

The main funder of a movement's largest charitable foundation is spending hours seriously engaging with community members' critiques of this strategic update. For most movements, no such conversation would occur at all.

Some critics in the comments are practicing rationalist discussion norms (high decoupling and reasoning transparency) and wish OP's communications were more like that too. However, there seems to be a lot we don't know about what caused GVF/OP leadership to make this update. Dustin seems very concerned about GVF/OP's attack surface and with conserving the bandwidth of their non-monetary resources. He's written at length about how he doesn't endorse rationalist-level decoupling as a rule of discourse. Given all of this, it's understandable that from Dustin's perspective, he has good reasons for not being as legible as he could be. Dishonest outside actors could quote statements or frame actions far more uncharitably than anything we'd see on the EA Forum.

Dustin is doing the best he can to engage with the rest of the community while balancing the explanation of his reasoning against legibility constraints we don't know about. We should be grateful for that.

Thanks for the post, Vasco!

From reading your post, your main claim seems to be: The expected value of the long-term future is similar whether it's controlled by humans, unaligned AGI, or another Earth-originating intelligent species.

If that's a correct understanding, I'd be interested in a more rigorous justification of that claim. Some counterarguments:

  1. This claim seems to assume the falsity of the orthogonality thesis, since it implies an unaligned AGI's values would be roughly as good as ours in expectation? (Which is fine, but I'd be interested in a justification of that premise.)
  2. Let's suppose that if humanity goes extinct, it will be replaced by another intelligent species, and that intelligent species will have good values. (I think these are big assumptions.) Priors would suggest that it would take millions of years for this species to evolve. If so, that's millions of years where we're not moving to capture universe real estate at near-light-speed, which means there's an astronomical amount of real estate which will be forever out of this species' light cone. It seems like just avoiding this delay of millions of years is sufficient for x-risk reduction to have astronomical value.

You also dispute that we're living in a time of perils, though that doesn't seem so cruxy, since your main claim above should be enough for your argument to go through either way. Still, your justification is that "I should be a priori very sceptical about claims that the expected value of the future will be significantly determined over the next few decades". There's a lot of literature (The Precipice, The Most Important Century, etc.) which argues that we have enough evidence of this century's uniqueness to overcome this prior. I'd be curious about your take on that.

(Separately, I think you had more to write after the sentence "Their conclusions seem to mostly follow from:" in your post's final section?)

(The following is mostly copied from this thread due to a lack of time. I unfortunately can't commit to much engagement on replies to this.)

The sign of the effect of MSI seems to rely crucially on a very high credence in the person-affecting view, where the interests of future people are not considered.

Since 2000, for every maternal death MSI has averted, it has prevented on average 502 unintended pregnancies. Even if only ~20% of these unintended pregnancies would counterfactually have been carried to term (due to abortion, replacement, and other factors), preventing one maternal death still prevents the creation of ~100 human beings. In other words, MSI's intervention prevents ~100x as much human life experience as it creates by averting a maternal death. If one wants to maximize expected choice-worthiness under moral uncertainty, assuming the value of human experience is independent of the person-affecting view, one must be ~99% confident that the person-affecting view is true for MSI to be net positive.
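
To make the arithmetic explicit, here's a minimal sketch of the breakeven calculation. The unit weights (+1 for the averted death under either view, -1 per prevented life counted only if the person-affecting view is false) are my simplifying assumptions for illustration, not MSI's figures:

```python
# Breakeven credence in the person-affecting view (PAV) for MSI to be net positive.
# Assumed units: one averted maternal death = +1 life-unit under either view;
# each prevented life = -1 life-unit, counted only if PAV is false.

pregnancies_prevented = 502      # per maternal death averted (MSI average since 2000)
counterfactual_birth_rate = 0.2  # ~20% would have been carried to term
lives_prevented = pregnancies_prevented * counterfactual_birth_rate  # ~100

def expected_value(p_person_affecting: float) -> float:
    """Expected choice-worthiness of averting one maternal death via MSI."""
    return 1.0 - (1.0 - p_person_affecting) * lives_prevented

# Setting expected_value(p) = 0 gives the breakeven credence:
breakeven = 1.0 - 1.0 / lives_prevented
print(f"Breakeven credence in the person-affecting view: {breakeven:.1%}")
# -> ~99%; any lower credence makes MSI net negative on these assumptions.
```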

However, many EAs, especially longtermists, argue that the person-affecting view is unlikely to be true. For example, Will MacAskill spends most of Chapter 8 of What We Owe The Future arguing that "all proposed defences of the intuition of neutrality [i.e. the person-affecting view] suffer from devastating objections". Toby Ord writes in The Precipice (p. 263) that "Any plausible account of population ethics will involve…making sacrifices on behalf of merely possible people."

If there's a significant probability that the person-affecting view may be false, then MSI's effect could in reality be up to 100x as negative as its effect on mothers is positive.

I worry about this line of reasoning because it's ends-justify-the-means thinking.

Let's say billions of people were being tortured right now, and some longtermists wrote about how this isn't even a feather in the scales compared to the cosmic endowment. These longtermists would be accused of callously gambling billions of years of suffering on a theoretical idea. I can just imagine The Guardian's articles about how SBF's naive utilitarianism is alive and well in EA.

The difference between the scenario for animals and the scenario for humans is that the former is socially acceptable but the latter is not. There isn't a difference in the actual badness.

Separately, to engage with the utilitarian merits of your argument, my main skepticism is an unwillingness to go all-in on ideas which remain theoretical when the stakes are billions of years of torture. (For example, let's say we ignore factory farming, and then there's a still-unknown consideration which prevents us or anyone else from accessing the cosmic endowment. That scares me.) Also, though I'm not a negative utilitarian, I think I take arguments for suffering-focused views more seriously than you might.

I'd like to give some context for why I disagree.

Yes, Richard Hanania is pretty racist. His views have historically been quite repugnant, and he's admitted, "I truly sucked back then". However, I think EA causes are more important than political differences. It's valuable when Hanania exposes the moral atrocity of factory farming and defends EA to his right-wing audience. If we're being scope-sensitive, I think we have a lot more in common with Hanania on the most important questions than we do on political issues.

I also think Hanania has excellent takes on most issues, and that's because he's the most intellectually honest blogger I've encountered. I think Hanania likes EA because he's willing to admit that he's imperfect, unlike EA's critics who would rather feel good about themselves than actually help others.

More broadly, I think we could be doing more to attract people who don't hold typical Bay Area beliefs. Just 3% of EAs identify as right wing. I think there are several reasons why, all else equal, it would be better to have more political diversity:

  • In this era of political polarization, it would be a travesty for EA issues to become partisan.
  • Political diversity is good for community epistemics. In that regard, it should be encouraged for much the same reason that cultural and racial diversity are encouraged.
  • If we want EA to be a global social movement, we need to show that one can be an EA even while holding beliefs on other issues that we find repugnant. I live in Panama for my job. When I arrived here, I had a culture shock from how backwards many people's views are on racism and sexism. If we can't be friends with the person next door who holds bad views, how are we going to make allies globally?

Funnily enough, that verse is often quoted to me by religious Jews when I talk about how many EAs donate >>20%.
