DanteTheAbstract

8 karma

Comments (4)

Great article.

Of the two cases you outlined (Exogenous end and Endogenous end), is it prudent to assume we are in the second one?

My thinking is that, by definition, if the exogenous end point is true then it is something we cannot affect: we can't move that end date forward or back. The endogenous case seems to be the one where actions or omissions have actual consequences, and it's in this case that we could make things much worse or better.

In your recent 80k podcast, almost all the work referenced seems to be targeted at the US and EU (except the Farm animal welfare in Asia section).

  • What is the actual geographic target of the work that’s being funded?
  • Is there work being done/planned to look at animal welfare funding opportunities more globally?

I don't think this argument is sound. In your EV calculation you're including the expected deaths over the thousand-year period but excluding the expected lives over that same period. There's an asymmetry in this comparison.

Also, I don’t see how x deaths of a given species could be worse than the extinction of that species. The way I see it, the first choice is to save k lives over a thousand years, while the second is to save fewer than k lives over the same period and lose all future lives after that, forever; a rough sketch of the comparison is below.

The government should defend against the second case.
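
To make that concrete, here is a minimal sketch of the comparison I have in mind. The symbols k, k', and L_t are mine and purely illustrative, not figures from the post:

EV(no extinction) = k + \sum_{t > 1000} L_t
EV(extinction) = k' + 0

where k' < k is the smaller number of lives saved over the thousand years, and L_t \ge 0 is the expected number of lives lived in year t if the species survives. Counting deaths over the first thousand years while dropping the \sum_{t > 1000} L_t term is exactly the asymmetry I mean: unless those future terms are somehow negative, the no-extinction option dominates.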

In the summary you mention that "Skepticism of formal philosophy is not enough". I’m new to the forum; could you (or anyone else) clarify what is meant by "formal philosophy"? Is the statement equivalent to just saying "Skepticism of philosophy is not enough" or "Skepticism of philosophical reasoning is not enough"?

Also, in the section "Increasing Animal Welfare Funding would Reduce OP’s Influence on Philanthropists" you compare AI x-risk and FAW. While AI x-risk reduction is also a niche cause area, I think you underestimate how niche FAW is relative to AI x-risk. The risk of alienating funders through a significant allocation to AI x-risk isn’t the same as for FAW, since AI x-risk is still largely a story about the impact on humans and their societies.

I’m not saying this is the correct view, just the one that would generally be held by most potential funders.

 

In general, the utilitarian case for your main points seems strong. Great post.