I previously thought Mark Fuentes was someone ~unaffiliated with this community. The article seemed to present enough evidence that I no longer believe this. (It also made me update downwards somewhat on the claims in the Fuentes post, but not enough to get back to pre-reading-the-post levels.)

That's an odd prior. I can see a case for a prior that gets you to <10^-6, maybe even 10^-9, but how can you get to substantially below 10^-9 annual with just historical data???

Sapiens hasn't been around for longer than a million years! (And conflict with Homo sapiens or other human subtypes still seems like a plausible cause of the extinction of other human subtypes to me.) There have only been maybe 4 billion species total in all of geological history! Even if you're almost certain that literally no species has ever died of conflict, you still can't get a prior much lower than 1/4,000,000,000 (~2.5 × 10^-10).
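As a rough illustration of that base-rate arithmetic, here's a minimal sketch using Laplace's rule of succession. The numbers are the illustrative ones from above, not careful estimates:

```python
# Rough floor on a historically-informed prior for extinction-via-conflict,
# using Laplace's rule of succession: (events + 1) / (trials + 2).
# Illustrative numbers from the comment above, not precise estimates.

total_species = 4_000_000_000   # ~all species in geological history
conflict_extinctions = 0        # assume literally none died of conflict

# Even with zero observed events, the estimated per-species probability
# can't drop much below 1/(n + 2).
prior_floor = (conflict_extinctions + 1) / (total_species + 2)
print(f"per-species prior floor: {prior_floor:.2e}")  # ~2.5e-10
```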

Anthropic issues questionable letter on SB 1047 (Axios). I can't find a copy of the original letter online. 

The former. I think it should be fairly intuitive if you think about the shape of the distribution you're drawing from. Here's the code, courtesy of Claude 3.5. [edit: deleted the quote block with the code because of aesthetics, link should still work].

I think Toby's use of "evenly split" is a bit of a stretch in 2024 with the information available, but lab leak is definitely still plausible. To quote Scott in the review:

Fourth, for the first time it made me see the coronavirus as one of God’s biggest and funniest jokes. Think about it. Either a zoonotic virus crossed over to humans fifteen miles from the biggest coronavirus laboratory in the Eastern Hemisphere. Or a lab leak virus first rose to public attention right near a raccoon-dog stall in a wet market. Either way is one of the century’s biggest coincidences, designed by some cosmic joker who wanted to keep the debate [...] acrimonious for years to come.

I think lab leak is now a minority position among people who looked into it, but it's not exactly a fringe view. I would guess at least some US intelligence agencies still think lab leak is more likely than not, for example.

And, like, idk, man. 130 is pretty smart but not "famous for their public intellectual output" level smart.

Yeah "2 sds just isn't that big a deal" seems like an obvious hypothesis here ("People might over-estimate how smart they are" is, of course, another likely hypothesis).

Also, of course, OP was being overly generous by assuming that it's a normal distribution centered around 128. If you take a bunch of random subsamples of a normal distribution, and only look at subsamples whose median is 2 sds out, in ~0 of those subsamples will you find it equally likely to see +0 sds and +4 sds.
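To make that concrete, here's a minimal simulation sketch in standard-deviation units. The subsample size, tolerance, and "near" windows are my illustrative choices, not anything from OP:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw many small subsamples from a standard normal and keep only those
# whose median lands near +2 sds. Subsample size, tolerance, and the
# "near" windows below are illustrative choices.
n_subsamples, k = 2_000_000, 5
samples = rng.standard_normal((n_subsamples, k))
medians = np.median(samples, axis=1)
kept = samples[np.abs(medians - 2.0) < 0.1]

# Within the kept subsamples, compare how often members fall near
# +0 sds vs near +4 sds.
near_0 = (np.abs(kept - 0.0) < 0.25).sum()
near_4 = (np.abs(kept - 4.0) < 0.25).sum()
print(f"subsamples kept: {len(kept)}")
print(f"values near +0 sds: {near_0}, values near +4 sds: {near_4}")
# Even after conditioning on a +2 sd median, values near +0 sds heavily
# outnumber values near +4 sds, because the normal density falls off so
# fast in the tail.
```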

Thank you for the article! I've been skeptical of the general arguments for progress (from a longtermist perspective) for several years but never managed to articulate a model quite as simple/clear as yours.

For instance, they might be able to lengthen our future by bringing forward the moment we develop technologies to protect us from natural threats to our survival. This is an intriguing idea, but there are challenges to getting it to work: especially since there isn’t actually much natural extinction risk for progress to reduce, whereas progress appears to be introducing larger anthropogenic risks
...
There might be other claims about the value of progress that are unaffected. For example, these considerations don’t directly undermine the argument that progress is better than stasis, and thus that if progress is fragile, we need to protect it. That may be true even if humanity does eventually bring about its own end, and even if our progress brings it about sooner.

When I've had these debates with people before, the most compelling argument for me (other than progress being good under a wide range of commonsensical assumptions + moral/epistemic uncertainty) is a combination of these arguments plus a few subtleties. It goes something like this:

  1. Progress isn't guaranteed.
  2. It's not actually very plausible/viable to maintain a society that's ~flat; in practice you're either going forwards or backwards.
  3. Alternatively, people paint a Red Queen story where work towards advancement is necessary to fight the natural forces of decline: if you (speaking broadly) don't meaningfully advance, civilizational decline will set in, likely in the span of mere decades or centuries.
    1. Sometimes people point to specific issues, e.g. institutional rot in specific societies, or global demographic decline, which needs to be countered by either greater per-capita productivity or some other way of creating more minds (most saliently via AGI, though maybe you can also do artificial wombs or something).
  4. Regression makes humanity more vulnerable to both exogenous risks (supervolcanoes etc.) and endogenous risks (superweapon wars, mass ideological capture by suicidal memes, fertility crisis).
    1. (Relatedly, there are subarguments here about why rebuilding is not guaranteed.)
  5. Therefore we need to advance society (technologically, economically, maybe in other ways) to survive, at least until the point where we have a stable society that doesn't need to keep advancing to stay safe.

This set of arguments only establishes that undifferentiated progress is better than no progress; it does not by itself directly argue against differential technological progress. However, people who ~roughly believe the above arguments will probably say that differential technological progress (or differential progress in general) is a good idea in theory but unrealistic in practice, with a handful of exceptions (like banning certain forms of gain-of-function research in virology). For example, they might point to Hayekian difficulties with central planning and argue that in many ways differential technological progress is even harder than the traditional problems that would-be central planners have faced.

On balance, I did not find their arguments convincing, but the question is pretty subtle, and I no longer think the case against "undifferentiated progress is exceptionally good" is a slam dunk[1], like I did a few years ago.

  1. ^

    In absolute terms. I think there's a stronger case that marginal work on preventing x-risk is much more valuable in expectation. Though that case is not extremely robust because of similar sign-error issues that plague attempted x-risk prevention work. 

This sounds awesome at first blush, would love to see it battle-tested.

I edited my comment for clarity.

The recently released 2024 Republican platform says they'll repeal the recent White House Executive Order on AI, which many in this community thought was a necessary first step towards making future AI progress safer/more secure. This seems bad.

Artificial Intelligence (AI)
We will repeal Joe Biden's dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing.

From https://s3.documentcloud.org/documents/24795758/read-the-2024-republican-party-platform.pdf, see bottom of pg 9.
