Robert_Wiblin
4921 karma · 456 comments

Seems like David agrees that once you were spread across many star systems this could reduce existential risk a great deal.

The other line of argument would be that at some point AI advances will either cause extinction or a massive drop in extinction risk.

The literature on a 'singleton' is in part addressing this issue.

Because there's so much uncertainty about all this, it seems overconfident to claim that it's extremely unlikely extinction risk will drop to near zero within the next 100 or 200 years.

Ah great, glad I got it!

I think I had always assumed that the argument for x-risk relied on the possibility that the annual risk of extinction would eventually either hit or asymptote to zero. If you think of life spreading out across the galaxy and then other galaxies, and then being separated by cosmic expansion, then that makes some sense.

To analyse it the most simplistic way possible — if you think extinction risk has a 10% chance of permanently going to 0% if we make it through the current period, and a 90% chance of remaining very high even if we make it through the current period, then extinction reduction takes a 10x hit to its cost-effectiveness from this effect. (At least that's what I had been imagining.)
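That simplistic discount can be sketched in a few lines of code. This is my own toy model, not anything from the original comment beyond its stated numbers: it assumes (crudely) that if risk stays permanently high, the long-run future contributes roughly zero value, since extinction happens later anyway.

```python
def value_of_surviving_now(p_risk_goes_to_zero, long_future_value):
    """Expected long-run value of making it through the current period.

    Toy assumption: if extinction risk stays permanently high, the
    long-run future is worth ~0, because we probably go extinct later.
    """
    return p_risk_goes_to_zero * long_future_value

# Normalise the long future's value to 1.
certain_safety = value_of_surviving_now(1.0, 1.0)  # risk surely drops to 0
uncertain = value_of_surviving_now(0.1, 1.0)       # only a 10% chance it does

print(certain_safety / uncertain)  # 10.0 — the "10x hit" to cost-effectiveness
```

Under these assumptions, halving the chance you assign to risk permanently going to zero halves the value of extinction reduction, which is the sense in which the effect scales linearly.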

I recall there's an appendix to The Precipice where Ord talks about this sort of thing. At least I remember that he covers the issue that it's ambiguous whether a high or low level of risk today makes the strongest case for working to reduce extinction being cost-effective, because, as I think you're pointing out above, while a low risk today makes it harder to reduce the probability of extinction by a given absolute amount, it simultaneously implies we're more likely to make it through future periods if we don't go extinct in this one, raising the value of survival now.

I'm not much at maths so I found this hard to follow.

Is the basic thrust that reducing the chance of extinction this year isn't so valuable if there remains a risk of extinction (or catastrophe) in future because in that case we'll probably just go extinct (or die young) later anyway?

I'm sympathetic to this, but so people don't think it will be trivial: note that the 80k Podcast did produce some video episodes, plus 60 extracts that went out on YouTube, Twitter, and I think some other places. They got only a middling level of engagement, and it didn't go up much over time.

Some nearby podcasts have made video episodes and had a lot of success (e.g. The Lunar Society), while for others video doesn't seem to have become a major way people consume the content (e.g. FLI, EconTalk).

So whether this is a high priority seems to depend on whether you can succeed at the content marketing aspect.

Yes, sorry: I didn't mean to 'explain away' any large shift (if it occurred); the anti- side may just have been more persuasive here.

Thanks for this post, good summary!

I recall reading that in debates like this, the audience usually moves against the majority position.

There's a simple a priori reason one might expect this: if to begin with there are twice as many people who agree with X as disagree with it, then the anti- side has twice as many people available who they can plausibly persuade to switch to their view.

If 10% of both groups change their minds, you'd go from 66.7% agreeing to 63.3% agreeing.
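A quick sanity check of that arithmetic (my own toy calculation, just restating the 2:1 split and 10% switch rate from the comment):

```python
# Start with a 2:1 majority agreeing with X.
agree, disagree = 2 / 3, 1 / 3
switch = 0.10  # 10% of each side changes its mind during the debate

# The agree side loses 10% of its supporters and gains 10% of the other side's.
agree_after = agree * (1 - switch) + disagree * switch

print(round(agree * 100, 1))        # 66.7
print(round(agree_after * 100, 1))  # 63.3
```

The majority shrinks simply because it has twice as many persuadable members to lose, even though both sides are equally persuasive per person.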

(Note that even a very fringe view benefits from equal time in a debate format, in a way that isn't the case in how much it gets covered in the broader social conversation.)

Would be neat if anyone could google around and see if this is a real phenomenon.

It's sad to think how much this will set back the research agenda he was a part of. Sometimes one researcher really can move forward a field.

Bear will be missed by many including me.

Thanks for the suggestion David! We're discussing adding this as a premium feature — perhaps activated only for Giving What We Can members.

Hi Pagw — in case you haven't seen it here's my November 2022 reply to Oli H re Sam Bankman-Fried's lifestyle:

"I was very saddened to hear that you thought the most likely explanation for the discussion of frugality in my interview with Sam was that I was deliberately seeking to mislead the audience.

I had no intention to mislead people into thinking Sam was more frugal than he was. I simply believed the reporting I had read about him and he didn’t contradict me.

It’s only in recent weeks that I learned that some folks such as you thought the impression about his lifestyle was misleading, notwithstanding Sam's reference to 'nice apartments' in the interview:

"I don’t know, I kind of like nice apartments. ... I’m not really that much of a consumer exactly. It’s never been what’s important to me. And so I think overall a nice place is just about as far as it gets."

Unfortunately as far as I can remember nobody else reached out to me after the podcast to correct the record either.

In recent years, in pursuit of better work-life balance, I’ve been spending less time socialising with people involved in the EA community, and when I do, I discuss work with them much less than in the past. I also last visited the SF Bay Area way back in 2019 and am certainly not part of the 'crypto' social scene. That may help to explain why this issue never came up in casual conversation.

Inasmuch as the interview gave listeners a false impression about Sam I am sorry about that, because we of course aim for the podcast to be as informative and accurate as possible."

Hi Misha — with this post I was simply trying to clarify that I understood and agreed with critics on the basic considerations here, in the face of some understandable confusion about my views (and those of 80,000 Hours).

So saying novel things to avoid being 'nonsubstantial' was not the goal.

As for the conclusion being "plausibly quite wrong" — I agree that a plausible case can be made for either the certain $1 billion or the uncertain $15 billion, depending on your empirical beliefs. I don't consider the issue settled, the points you're making are interesting, and I'd be keen to read more if you felt like writing them up in more detail.[1]

The question is sufficiently complicated that it would require concentrated analysis by multiple people over an extended period to do it full justice, which I'm not in a position to do.

That work is most naturally done by philanthropic program managers for major donors rather than 80,000 Hours.

I considered adding in some extra math regarding log returns and what that would imply in different scenarios, but opted not to because i) it would take too long to polish, ii) it would probably confuse some readers, iii) it could lead to too much weight being given to a highly simplified model that deviates from reality in important ways. So I just kept it simple.


  1. I'd just note that maintaining a controlling stake in TSMC would tie up >$200 billion. IIRC that's on the order of 100x as much as has been spent on targeted AI alignment work so far. For that to be roughly as cost-effective as present marginal spending on AI or other existential risks, it would have to be very valuable indeed (or you'd have to think current marginal spending was of very poor value). ↩︎
