Davidmanheim

Head of Research and Policy @ ALTER - Association for Long Term Existence and Resilience
7156 karma · Joined · Working (6-15 years)

Participation
4

  • Received career coaching from 80,000 Hours
  • Attended more than three meetings with a local EA group
  • Completed the AGI Safety Fundamentals Virtual Program
  • Completed the In-Depth EA Virtual Program

Sequences
2

Deconfusion and Disentangling EA
Policy and International Relations Primer

Comments
844

Topic contributions
1

Very much in favor of posts clarifying that cause neutrality doesn't require value neutrality or deference to others' values.

I very much appreciate that you are thinking about this, and the writing is great. That said, without trying to address the arguments directly, I worry that the style here justifies a conclusion you've already come to and explores analogies you like, rather than exploring the arguments and trying to decide which side to be on; it fails to embrace scout mindset sufficiently to be helpful.

I think that replaceability is very high, so the counterfactual impact is minimal. That said, there is very little possibility in my mind that even helping with RLHF for compliance with their "safety" guidelines is more beneficial for safety than for accelerating the capabilities race, so any impact is negative.

I don't think multi-person disagreements are, in general, a tractable problem for one-hour sessions. It sounds like you need someone in charge to enable disagree-then-commit, rather than a better way to argue.

How much of the money raised by Effektiv Spenden, etc. is essentially a pass-through to GiveWell? (I know Israel now has a similar initiative, but it is in large part passing the money to the same orgs.)

I'm cheating a bit, because both of these are well on their way, but two big current goals:

  1. Get Israel to iodize its salt!
  2. Run an expert elicitation on Biorisk with RAND and publish it.

Not predictions as such, but lots of current work on AI safety and steering is based pretty directly on paradigms from Yudkowsky and Christiano - from Anthropic's Constitutional AI to ARIA's Safeguarded AI program. There is also OpenAI's Superalignment research, which was attempting to build AI that could solve agent foundations - that is, to explicitly do the work that theoretical AI safety research identified. (I'm unclear whether that last effort is ongoing, given that they managed to alienate most of the people involved.)

I strongly agree that you need to put your own needs first, and I think that your level of comfort with your savings and your ability to withstand foreseeable challenges are key inputs. My go-to, in general, is that the standard advice of keeping 3-6 months of expenses is a reasonable goal - so you can and should give, but until you have saved that much, you should at least be splitting your excess funds between savings and charity. (And the reason most people don't manage this has a lot to do with lifestyle choices and a failure to manage their spending - not just not having enough income. Normal people never have enough money to do everything they'd like to; set your expectations clearly and work to avoid the hedonic treadmill!)

To follow on to your point as it relates to my personal views (in case anyone is interested), it's worth quoting the code of Jewish law. It introduces its discussion of Tzedakah by asking how much one is required to give: "The amount, if one has sufficient ability, is giving enough to fulfill the needs of the poor. But if you do not have enough, the most praiseworthy version is to give one fifth, the normal amount is to give a tenth, and less than that is a poor sign." I note that this was written in the 1500s, when local charity was the majority of what was practical; today the needs are clearly beyond any one person's ability, so the latter clauses are the relevant ones.

So I think that, in a religion that prides itself on exacting standards and exhaustive rules for the performance of mitzvot, this endorses exactly your point: while giving might be a standard, and norms and community behavior are helpful guides, the amount to give is always a personal and pragmatic decision, not a general rule.
