Eli Rose

Program Officer, Global Catastrophic Risks Capacity-Building @ Open Philanthropy
1988 karma · Joined · Working (6-15 years)

Bio

GCR capacity-building grantmaking and projects at Open Phil.

Posts
28


Sequences
1

Open Phil EA/LT Survey 2020

Comments
158

What is the base rate for Chinese citizens saying on polls that the Chinese government should regulate X, for any X?

I thought this was interesting & forceful, and am very happy to see it in public writing.

The full letter is available here — it was recently posted online as part of this tweet thread.

(meta musing) The conjunction of the negations of a bunch of statements seems a bit doomed to get a lot of disagreement karma, sadly. Esp. if the statements being negated are "common beliefs" of people like the ones on this forum.

I agreed with some of these and disagreed with others, so I felt unable to agreevote. But I strongly appreciated the post overall so I strong-upvoted.

  1. Similar to that of our other roles, plus experience running a university group as an obvious one — I also think that extroversion and proactive communication are somewhat more important for these roles than for others.
  2. Going to punt on this one as I'm not quite sure what is meant by "systems."
  3. This is too big to summarize here, unfortunately.
  1. Check out "what kinds of qualities are you looking for in a hire" here. My sense is we index less on previous experience than many other organizations (though it's still important). Experience juggling many tasks, prioritizing, and syncing up with stakeholders jumps to mind. I have a hypothesis that consultant experience would be helpful for this role, but that's a bit conjectural.
  2. This is a bit TBD — happy to chat more further down the pipeline with any interested candidates.
  3. We look for this in work tests and in previous experience.
  1. The CB team continuously evaluates the track record of grants we've made when they're up for renewal, and this feeds into our sense of how good programs are overall. We also spend a lot of time keeping up with what's happening in CB and in x-risk generally, and this feeds into our picture of how well CB projects are working.
  2. Check out "what kinds of qualities are you looking for in a hire" here.
  3. Same answer as 2.

Empirically, in hiring rounds I've previously been involved in for my team at Open Phil, it has often seemed to be the case that if the top 1-3 candidates just vanished, we wouldn't make a hire. I've also observed hiring rounds that concluded with zero hires. So, basically I dispute the premise that the top applicants will be similar in terms of quality (as judged by OP).

I'm sympathetic to the take "that seems pretty weird." It might be that Open Phil is making a mistake here, e.g. by having too high a bar. My unconfident best guess would be that our bar has been somewhat too high in the past, though this is speaking just for myself. I think when you have a lot of strategic uncertainty, as GCR teams often do, that pushes towards a higher hiring bar, since you need people with a wide variety of skills.

I'd probably also gently push back against the notion that our hiring pool is extremely deep, though that's obviously relative. I think e.g. our TAIS roles will likely get many fewer applicants than safety research roles at labs that draw from a similar applicant pool, for a mix of reasons including salience to relevant people and the fact that OP isn't competitive with labs on salary.

(As of right now, TAIS has only gotten 53 applicants across all its roles since the ad went up, vs. governance which has gotten ~2x as many — though a lot of people tend to apply right around the deadline.)

Thanks for the reply.

I think "don't work on climate change[1] if it would trade off against helping one currently identifiable person with a strong need" is a really bizarre/undesirable conclusion for a moral theory to come to, since, if widely adopted, it seems like it would leave no one to work on climate change. The prospective climate change scientists would instead earn-to-give for AMF.

  1. ^

    Or bettering relations between countries to prevent war, or preventing the rise of a totalitarian regime, etc.

Moreover, it’s common to assume that efforts to reduce the risk of extinction might reduce it by one basis point—i.e., 1/10,000. So, multiplying through, we are talking about quite low probabilities. Of course, the probability that any particular poor child will die due to malaria may be very low as well, but the probability of making a difference is quite high. So, on a per-individual basis, which is what matters given contractualism, donating to AMF-like interventions looks good.
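To make the quoted "multiplying through" step concrete, here is a minimal back-of-the-envelope version; the exposure probability $p$ is an illustrative assumption, not a figure from the quoted passage:

$$
P(\text{a given individual benefits}) \;\approx\; \underbrace{10^{-4}}_{\text{one basis point of risk reduction}} \times \underbrace{p}_{\text{chance that individual would otherwise die in the catastrophe}} \;\le\; 10^{-4}.
$$

Since $p \le 1$, the per-person probability of benefit from the extinction-risk effort is capped at roughly one in ten thousand, which is the sense in which the quoted passage calls these "quite low probabilities."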

 

It seems like a society where everyone took contractualism to heart might have a hard time coordinating on any large moral issues where the difference any one individual makes is small, including non-x-risk ones like climate change or preventing great power war. What does the contractualist position recommend on these issues?

(In climate change, it's plausibly the case that "every little bit helps," while in preventing war between great powers outcomes seem much more discontinuous — not sure if this matters.)
