
Erich_Grunewald 🔸

Researcher @ Institute for AI Policy and Strategy
2489 karma · Joined · Working (6-15 years) · Berlin, Germany · www.erichgrunewald.com

Bio

Anything I write here is written purely on my own behalf, and does not represent my employer's views (unless otherwise noted).

Comments (280)

Yeah, but as you point out below, that simple model makes some unrealistic assumptions (e.g., that a solution will definitely be found that fully eliminates farmed animal suffering, and that a person starts contributing, in expectation, to solving meat eating at age 0). So it still seems to me that a better argument is needed to shift the prior.

As a first-pass model: removing person-years from the present doesn't reduce the number of animals harmed before a solution is found; it just makes the solution arrive later.

I doubt that is a good way to model this (for farmed animals). Consider the extremes:

  • If we reduce the human population to zero, we reduce the amount of suffering of farmed animals to zero, since there will be no more farmed animals.
  • If we increase the human population to the Malthusian limit, we increase the amount of suffering of farmed animals in the short and probably medium terms, and may or may not decrease farmed animal suffering in the longer term. One reason to think we would increase the amount of suffering by adding many more people is that, historically, farmed animal suffering and human population have likely been closely correlated. At any rate, the amount of farmed animal suffering in this scenario is likely nonzero.

So as a first approximation, we should just assume the amount of suffering in factory farms increases monotonically with the human population, since we can be fairly confident in these three data points (no suffering with no humans; lots of suffering with 8B humans; maybe more, maybe less suffering at the Malthusian limit). Of course that would be an oversimplified model. But it is a starting point, and getting from that starting point to "adding people on the margin reduces or doesn't affect expected farmed animal suffering" requires a better argument.
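(To spell out that first approximation slightly more formally -- this is just a minimal sketch, and the symbols S and N are mine, not anything from the original exchange: let N be the human population and S(N) the expected amount of farmed animal suffering. The three data points above are roughly

S(0) = 0,    S(8B) >> 0,    S(N_Malthus) > 0 (direction of change relative to today unknown)

The extra assumption is that S is non-decreasing in N. Under that assumption the marginal person contributes S(N+1) - S(N) >= 0, i.e., adding people on the margin does not reduce expected suffering. Resisting that conclusion requires arguing that S is non-monotonic somewhere between those data points, which is exactly the better argument I'm asking for.)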

Here are the three most popular comments as of now. One, "giving to effective charities can create poverty in the form of exploited charity workers":

I’ve worked for a non-profit in the past at an unlivable wage. One of my concerns when I am looking at charities to give to and hearing that we need to give only to those that are most efficient, is that we are creating more poverty by paying the workers at some charities wages that they can’t live on.

Two, "US charities exist because the rich aren't taxed enough":

Our whole system of charity in the US has developed because the wealthy aren’t taxed enough, and hence our government doesn’t do enough. Allowing the rich to keep so much wealth means we don’t have enough national or state level funding for food, housing, healthcare, or education. We also don’t have adequate government programs to protect the environment, conduct scientific research, and support art and culture. I’m deluged every day by mail from dozens of organizations trying to fill these gaps. But their efforts will never have the impact that well planned longterm government action could.

Three, "I just tip generously":

Lately I’ve been in the mindset of giving money to anyone who clearly has less than me when I have the opportunity. This mostly means extra generous tipping (when I know the tips go to the workers and not a corporation). Definitely not efficient, but hopefully makes a tiny difference.

These just seem really weak to me. What other options did the underpaid charity workers have, that were presumably worse than working for the charity? Even if the US taxed the rich very heavily, there would still be lots of great giving opportunities (e.g., to help people in other countries, and to help animals everywhere). Tipping generously is sort of admirable, but if it's admittedly inefficient, why not do the better thing instead? I guess these comments just illustrate that there is a lot of room for the core ideas of effective altruism (and basic instrumental rationality) to gain wider adoption.

Your past self is definitely wrong -- GovAI does way more policy work than technical work -- but maybe that's irrelevant since you prioritize advocacy work anyway (and GovAI does little of that).

I don't understand why so many are disagreeing with this quick take, and would be curious to know whether the disagreement is on normative or empirical grounds, and where exactly it lies. (I personally neither agree nor disagree as I don't know enough about it.)

From some quick searching, Lessig's best defence against accusations that he tried to steal an election seems to be that he wanted to resolve a constitutional uncertainty. E.g.: "In a statement released after the opinion was announced, Lessig said that 'regardless of the outcome, it was critical to resolve this question before it created a constitutional crisis'. He continued: 'Obviously, we don’t believe the court has interpreted the Constitution correctly. But we are happy that we have achieved our primary objective -- this uncertainty has been removed. That is progress.'"

But it sure seems like the timing and nature of that effort (post-election, specifically targeting Trump electors) suggest some political motivation rather than purely constitutional concerns. As best I can tell, it's in the same general category as Giuliani's effort to overturn the 2020 election, though importantly different in that Giuliani (a) had the support and close collaboration of the incumbent, (b) seemed to actually commit crimes doing so, and (c) did not respect court decisions the way Lessig did.

That still does not seem like reinventing the wheel to me. My read of that post is that it's not saying "EAs should do these analyses that have already been done, from scratch" but something closer to "EAs should pay more attention to strategies from development economics and identify specific, cost-effective funding opportunities there". Unless you think development economics is solved, there is presumably still work to be done, e.g., to evaluate and compare different opportunities. For example, GiveWell definitely engages with experts in global health, but still needs to rigorously evaluate and compare different interventions and programs.

And again, the article mentions development economics repeatedly and cites development economics texts -- why would someone mention a field, cite texts from a field, and then suggest reinventing it without giving any reason?

It would be helpful if you mentioned who the original inventor was.

I don't see how this is reinventing the wheel? The post makes many references to development economics (11 mentions to be precise). It was not an instance of independently developing something that ended up being close to development economics.

I don't think you're wrong exactly, but AI takeover doesn't have to happen through a single violent event, or through a treacherous turn or whatever. All of your arguments also apply to the situation with H. sapiens and H. neanderthalensis, but those factors did not prevent the latter from going extinct largely due to the activities of the former:

  1. There was a cost to violence that humans did against neanderthals
  2. The cost of using violence was not obviously smaller than the benefits of using violence -- there was a strong motive for the neanderthals to fight back, and using violence risked escalation, whereas peaceful trade might have avoided those risks
  3. There was no one human that controlled everything; in fact, humans likely often fought against one another
  4. You allow for neanderthals to be less capable or coordinated than humans in this analogy, which they likely were in many ways

The fact that those considerations were not enough to prevent neanderthal extinction is one reason to think they are not enough to prevent AI takeover, although of course the analogy is not perfect or conclusive, and it's just one reason among several. A couple of relevant parallels include:

  • If alignment is very hard, that could mean AIs compete with us over resources that we need to survive or flourish (e.g., land, energy, other natural resources), similar to how humans competed over resources with neanderthals
  • The population of AIs may be far larger, and grow more rapidly, than the population of humans, similar to how human populations were likely larger and growing at a faster rate than those of neanderthals

I don't think people object to these topics being heated either. I think there are probably (at least) two things going on:

  1. There's some underlying thing causing some disagreements to be heated/emotional, and people want to avoid that underlying thing (that could be that it involves exclusionary beliefs, but it could also be that it is harmful in other ways)
  2. There's a reputational risk in being associated with controversial issues, and people want to distance themselves from those for that reason

Either way, I don't think the problem is centrally about exclusionary beliefs, and I also don't think it's centrally about disagreement. But anyway, it sounds like we mostly agree on the important bits.
