I think of the meat eater problem as pretty distinct from general moral cluelessness. You can estimate how much additional meat people will eat as their incomes increase or as they continue to live. You might be highly uncertain about weighing animals vs. humans as moral patients, but that is also something you can pretty directly debate, and you can see the implications of different weights. I think of cluelessness as applying only when there are many, many possible consequences that could be highly positive or negative, and it's nearly impossible to discuss or attempt to quantify them because the dimensions of uncertainty are so numerous.
The point I was initially trying to make was only that I don't think the generalized cluelessness critique particularly favors one cause (for example, animal welfare) over another (for example, human health), or vice versa. I think you might make specific arguments about uncertainty regarding particular causes or interventions, but pointing to a general sense of uncertainty does not really move the needle toward any particular cause area.
Separate from that point, I do sort of believe in cluelessness (moral and otherwise) more generally, but honestly just try to ignore that belief for the most part.
I am pretty unmoved by this distinction, and based on the link above, it seems that Greaves is really just making the point that a longtermist mindset incentivizes us to find robustly good interventions, not that it actually succeeds in doing so. I think it's pretty easy to make the cluelessness case about AI alignment as a cause area, for example. It seems quite plausible to me that a lot of so-called alignment work is actually just serving to speed up capabilities. It also seems to me that you could align an AI to human values and find that human values are quite bad. Or you could successfully align AI enough to avoid extinction and find that the future is astronomically bad and extinction would have been preferable.
It seems like the whole premise of this debate is (rightly) based on the idea that there is in fact a necessary trade-off between human and animal welfare, no? I.e., if we give the $100 million towards the most cost-effective human-focused intervention we can think of, then we are necessarily not giving it towards the most cost-effective animal-focused intervention we can think of. Of course it is theoretically possible that there exists some intervention which is simultaneously the most cost-effective on both a humans-per-dollar and an animals-per-dollar basis, but that seems extremely unlikely.
I am curious where you think it stops. What standard of living are people "obligated" to sink to in order to help strangers? I don't deny that any of this is good or praiseworthy, but it doesn't seem to have any limiting principle. Should everyone live in squalor, forgo a family and deep friendships, and not pursue any passions because time and money can always be spent saving another stranger?
Yes, but I think it's significant that one is morally entitled, not just legally entitled. In other words, imagine replacing pressing the button with actually doing the work to earn $6k. Do you think you are, for example, obligated to drive 12 hours each way in order to pull a drowning child out of a lake? The amount of money in your bank account is endogenous to how much work and effort you put into filling it, whereas I think the way this thought experiment is framed makes it sound like that money fell from the sky.
If you think you are in fact obligated to drive 24 hours, increase your own risk of death by taking on a risky job, or give up time with your children in order to save a stranger, then I am more sympathetic to the idea that you are obligated to give up money for that stranger. However, I do not share that intuition.
The difference is that property is distributed based on morally significant, non-random, voluntary activities. See Governing Least by Dan Moller for a moral defense of property. This implies that a) you are entitled to your property because you earned it through morally legitimate means, and b) it is a good thing for society more broadly to accept the moral legitimacy of property that is earned through creation, discovery, etc., so the norm that people in general are entitled to their property in most cases is pro-social.
In contrast to most ways of acquiring property, accepting money for murder is not a morally defensible basis for ownership. This means that a) you are not entitled to that money, and b) supporting such a norm would be bad.
There are, of course, cases in which you might acquire property in a way that is not morally legitimate. I think the distinction there is far more tenuous, but that is not the case for the bulk of most people's money.
You don't think a lot of non-EA altruistic actions involve saving lives??