Zach Stein-Perlman

5141 karma · Joined · Working (0-5 years) · Berkeley, CA, USA
ailabwatch.org

Bio

AI strategy & governance. ailabwatch.org.

Comments

"Improve US AI policy 5 percentage points" was defined as

Instead of buying think tanks, this option lets you improve AI policy directly. The distribution of possible US AI policies will go from being centered on the 50th-percentile-good outcome to being centered on the 55th-percentile-good outcome, as per your personal definition of good outcomes. The variance will stay the same.

(This is still poorly defined.)

A few DC and EU people tell me that in private, Anthropic (and others) are more unequivocally antiregulation than their public statements would suggest.

I've tried to get this on the record—person X says that Anthropic said Y at meeting Z, or just Y and Z—but my sources have declined.

I believe that Anthropic's policy advocacy is (1) bad and (2) worse in private than in public.

But Dario and Jack Clark do publicly oppose strong regulation. See https://ailabwatch.org/resources/company-advocacy/#dario-on-in-good-company-podcast and https://ailabwatch.org/resources/company-advocacy/#jack-clark. So this letter isn't surprising or a new betrayal — the issue is the preexisting antiregulation position, insofar as it's unreasonable.

Thanks.

I notice they have few publications.

Setting aside whether Neil's work is useful, presumably almost all of the grant is for his lab, and I couldn't find information on the lab.

...huh, I usually disagree with posts like this, but I'm quite surprised by the 2022 and 2023 grants.

Actually, this is a poor description of my reaction to this post. Oops. I should have said:

Digital mind takeoff is maybe-plausibly crucial to how the long-term future goes. But this post seems to focus on short-term stuff, such that the considerations it discusses miss the point (according to my normative and empirical beliefs). For example, the y-axis in the graphs tracks what matters in the short term, and that's at most weakly associated with what matters in the long term: affecting the values of the von Neumann probes or similar. And the post is generally concerned with short-term stuff, e.g. being particularly concerned about "High Maximum Altitude Scenarios": aggregate welfare capacity "at least that of 100 billion humans" "within 50 years of launch." Even ignoring these particular numbers, the post is ultimately concerned with stuff that's a rounding error relative to the cosmic endowment.

I'm much more excited about "AI welfare" work that's about what happens with the cosmic endowment, or at least (1) about stuff directly relevant to that (like the long reflection) or (2) connected to it via explicit heuristics, like: the cosmic endowment will be used better in expectation if "AI welfare" is more salient when we're reflecting or choosing values or whatever.

The considerations in this post (and most "AI welfare" posts) are not directly important to digital mind value stuff, I think, if digital mind value stuff is dominated by possible superbeneficiaries created by von Neumann probes in the long-term future. (Note: this is a mix of normative and empirical claims.)

(Minor point: in an unstable multipolar world, it's not clear how things get locked in, and for the von Neumann probes in particular, note that if you can launch slightly faster probes a few years later, you can beat rushed-out probes.)

When telling stories like your first paragraph, I wish people either said "almost all of the galaxies we reach are tiled with some flavor of computronium and here's how AI welfare work affected the flavor" or "it is not the case that almost all of the galaxies we reach are tiled with some flavor of computronium and here's why."

"The universe will very likely be tiled with some flavor of computronium" is a crucial consideration, I think.
