Anything I write here is written purely on my own behalf, and does not represent my employer's views (unless otherwise noted).
As a first-pass model: removing person-years from the present doesn't reduce the number of animals harmed before a solution is found; it just makes the solution arrive later.
I doubt that is a good way to model this (for farmed animals). Consider the extremes:
So as a first approximation, we should just assume the amount of suffering in factory farms increases monotonically with the human population, since we can be fairly confident in these three data points (no suffering with no humans; lots of suffering with 8B humans; maybe more, maybe less suffering at the Malthusian limit). Of course that would be an oversimplified model. But it is a starting point, and getting from that starting point to "adding people on the margin reduces or doesn't affect expected farmed animal suffering" requires a better argument.
Here are the three most popular comments as of now. One, "giving to effective charities can create poverty in the form of exploited charity workers":
I’ve worked for a non-profit in the past at an unlivable wage. One of my concerns when I am looking at charities to give to and hearing that we need to give only to those that are most efficient, is that we are creating more poverty by paying the workers at some charities wages that they can’t live on.
Two, "US charities exist because the rich aren't taxed enough":
Our whole system of charity in the US has developed because the wealthy aren’t taxed enough, and hence our government doesn’t do enough. Allowing the rich to keep so much wealth means we don’t have enough national or state level funding for food, housing, healthcare, or education. We also don’t have adequate government programs to protect the environment, conduct scientific research, and support art and culture. I’m deluged every day by mail from dozens of organizations trying to fill these gaps. But their efforts will never have the impact that well planned longterm government action could.
Three, "I just tip generously":
Lately I’ve been in the mindset of giving money to anyone who clearly has less than me when I have the opportunity. This mostly means extra generous tipping (when I know the tips go to the workers and not a corporation). Definitely not efficient, but hopefully makes a tiny difference.
These just seem really weak to me. What other options did the underpaid charity workers have, that were presumably worse than working for the charity? Even if the US taxed the rich very heavily, there would still be lots of great giving opportunities (e.g., helping people in other countries, and helping animals everywhere). Tipping generously is sort of admirable, but if it's admittedly inefficient, why not do the better thing instead? I guess these comments just illustrate that there is a lot of room for the core ideas of effective altruism (and basic instrumental rationality) to gain wider adoption.
I don't understand why so many are disagreeing with this quick take, and would be curious to know whether the disagreement is on normative or empirical grounds, and where exactly it lies. (I personally neither agree nor disagree, as I don't know enough about it.)
From some quick searching, Lessig's best defence against accusations that he tried to steal an election seems to be that he wanted to resolve a constitutional uncertainty. E.g.: "In a statement released after the opinion was announced, Lessig said that 'regardless of the outcome, it was critical to resolve this question before it created a constitutional crisis'. He continued: 'Obviously, we don’t believe the court has interpreted the Constitution correctly. But we are happy that we have achieved our primary objective -- this uncertainty has been removed. That is progress.'"
But it sure seems like the timing and nature of that effort (post-election, specifically targeting Trump electors) suggest some political motivation rather than purely constitutional concerns. As best I can tell, it's in the same general category of efforts as Giuliani's effort to overturn the 2020 election, though importantly different in that Giuliani (a) had the support and close collaboration of the incumbent, (b) seemed to actually commit crimes in doing so, and (c) did not respect court decisions the way Lessig did.
That still does not seem like reinventing the wheel to me. My read of that post is that it's not saying "EAs should do these analyses that have already been done, from scratch" but something closer to "EAs should pay more attention to strategies from development economics and identify specific, cost-effective funding opportunities there". Unless you think development economics is solved, there is presumably still work to be done, e.g., to evaluate and compare different opportunities. For example, GiveWell definitely engages with experts in global health, but still also needs to rigorously evaluate and compare different interventions and programs.
And again, the article mentions development economics repeatedly and cites development economics texts -- why would someone mention a field, cite texts from a field, and then suggest reinventing it without giving any reason?
I don't think you're wrong exactly, but AI takeover doesn't have to happen through a single violent event, or through a treacherous turn or whatever. All of your arguments also apply to the situation with H sapiens and H neanderthalensis, but those factors did not prevent the latter from going extinct largely due to the activities of the former:
The fact that those considerations were not enough to prevent neanderthal extinction is one reason to think they are not enough to prevent AI takeover, although of course the analogy is not perfect or conclusive, and it's just one reason among several. A couple of relevant parallels include:
I don't think people object to these topics being heated either. I think there are probably (at least) two things going on:
Either way, I don't think the problem is centrally about exclusionary beliefs, and I also don't think it's centrally about disagreement. But anyway, it sounds like we mostly agree on the important bits.
Yeah, but as you point out below, that simple model makes some unrealistic assumptions (e.g., that a solution will definitely be found that fully eliminates farmed animal suffering, and that a person starts contributing, in expectation, to solving meat eating at age 0). So it still seems to me that a better argument is needed to shift the prior.