Zach Stein-Perlman

ailabwatch.org
Berkeley, CA, USA

Bio


AI strategy & governance. ailabwatch.org.

Comments

Thanks.

I notice they have few publications.

Setting aside whether Neil's work is useful, presumably almost all of the grant is for his lab. I couldn't find any information on his lab.

...huh, I usually disagree with posts like this, but I'm quite surprised by the 2022 and 2023 grants.

Actually, this is a poor description of my reaction to this post. Oops. I should have said:

Digital mind takeoff is maybe-plausibly crucial to how the long-term future goes. But this post seems to focus on short-term stuff, such that the considerations it discusses miss the point (given my normative and empirical beliefs). For example, the y-axis in the graphs measures what matters short-term, which is at most weakly associated with what matters long-term (affecting the von Neumann probe values or similar). And the post is generally concerned with short-term stuff, e.g. it is particularly concerned about "High Maximum Altitude Scenarios," where aggregate welfare capacity reaches "at least that of 100 billion humans" "within 50 years of launch." Even setting aside these particular numbers, the post is ultimately concerned with stuff that's a rounding error relative to the cosmic endowment.

I'm much more excited about "AI welfare" work that's about what happens with the cosmic endowment, or at least (1) about stuff directly relevant to that (like the long reflection) or (2) connected to it via explicit heuristics, like "the cosmic endowment will be used better in expectation if 'AI welfare' is more salient when we're reflecting or choosing values."

The considerations in this post (and most "AI welfare" posts) are not directly important to digital mind value stuff, I think, if digital mind value stuff is dominated by possible superbeneficiaries created by von Neumann probes in the long-term future. (Note: this is a mix of normative and empirical claims.)

(Minor point: in an unstable multipolar world, it's not clear how things get locked in, and for the von Neumann probes in particular, note that if you can launch slightly faster probes a few years later, you can beat rushed-out probes.)

When telling stories like the one in your first paragraph, I wish people either said "almost all of the galaxies we reach are tiled with some flavor of computronium, and here's how AI welfare work affected the flavor" or "it is not the case that almost all of the galaxies we reach are tiled with some flavor of computronium, and here's why."

"The universe will very likely be tiled with some flavor of computronium" is a crucial consideration, I think.

Briefly + roughly (not precise):

At some point we'll send out lightspeed probes to tile the universe with some flavor of computronium. The key question (for scope-sensitive altruists) is what that computronium will compute. Will an unwise agent or incoherent egregore answer that question thoughtlessly? I intuit no.

I can't easily make this intuition legible. (So I likely won't reply to messages about this.)

I agree this is possible, and I think a decent fraction of the value of "AI welfare" work comes from stuff like this.

"Those humans decide to dictate some or all of what the future looks like, and lots of AIs end up suffering in this future because their welfare isn't considered by the decision makers."

This would be very weird: it requires either that the value-setters are very rushed or that they have lots of time to consult with superintelligent advisors but still make the wrong choice. Both paths seem unlikely.

Among your friends, I agree; among EA Forum users, I disagree.
