OscarD

981 karmaJoined Working (0-5 years)Oxford, UK

Comments (155)

Good point. I think if X-risk is very low, it is less urgent/important to work on (so the conditional works in that direction, I reckon). But I agree that the inverse - if X-risk is very high, it is very urgent/important to work on - isn't always true (though I think it usually is; generally, bigger risks are easier to work on).

Thanks, fixed. I was basing this off of Table 1 (page 20) in the original but I suppose Leopold meant the release year there.

FYI for everyone interested in Leopold's report but intimidated by its length: I am currently writing a detailed summary, and expect to post it to the Forum in the next day or two. I will update this comment once I have done so.

I would be interested in @Greg_Colbourn's thoughts here! Possibly part of the value is in generating discussion and publicly defending a radical idea, rather than just the monetary EV. But if so maybe a smaller bet would have made sense.

When you say 'AI concerned' does that mean you would be interested in taking Greg's side of the bet (that everyone will die)? That is my interpretation, but the fact that you didn't say this explicitly makes me unsure.

Great to see this public demonstration of both of your respective beliefs!

Thanks for the comment (and welcome to the Forum! :) ). Yeah, using conditional oughts seems like a pretty reasonable approach to me, though of course it has some convenience cost when the Z is very widely shared ('you ought to fix your brakes rather than drive without brakes, in order not to crash'), so it can perhaps then be left implied.

Great post, and an interesting counterfactual history!

Hooray for moral trade.

Evolutionary debunking arguments feel relevant regarding the causal history of our beliefs.

One thing I have heard is that having long-ish application stages provides value by getting more people to think about relevant topics (I have heard this from at least two orgs, I think). E.g. having several hundred people spend an hour writing a paragraph about an AI safety topic might be valuable simply by virtue of getting more people to think more about these issues. I haven't seen a write-up weighing the pros and cons of this, though. I agree it can be bad for applicants.
