Zach Stein-Perlman

Looking for new projects
4877 karma · Working (0-5 years) · Berkeley, CA, USA

Bio

AI strategy & governance. ailabwatch.org.

Comments

I am surprised that you don't understand Eliezer's comments in this thread. I claim you'd do better to donate $X to PauseAI now than to lock up $2X that you will never see again (plus more for overcollateralization) in order to get $X to PauseAI now.

For anyone who wants to bet on doom:

  • I claim it can’t possibly be good for you
    • Unless you plan to spend all of your money before you would owe money back
      • People seem to think what matters is ∫bankroll when what actually matters is ∫consumption? (Money you never get to spend does you no good.)
    • Or unless you're betting on high rates of return on capital, not really on doom
  • Good news: you can probably borrow cheaply. E.g. if you have $2X in investments, you can sell them, invest $X at 2X leverage, and effectively borrow the other $X (see the sketch below).
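
A minimal sketch of that arithmetic, with illustrative numbers (the hidden cost is the leverage's financing rate, e.g. margin interest, typically near prevailing short-term rates):

```python
# Sketch: free up cash while keeping market exposure constant.
# All numbers are illustrative assumptions, not financial advice.

X = 10_000                 # arbitrary unit of capital
portfolio_before = 2 * X   # $2X held in unleveraged investments

cash_invested = X          # sell everything, reinvest only $X...
leverage = 2               # ...at 2x leverage
exposure_after = cash_invested * leverage  # still $2X of market exposure

cash_freed = portfolio_before - cash_invested  # $X now free to donate or spend

assert exposure_after == portfolio_before
print(f"Exposure unchanged at ${exposure_after:,}; ${cash_freed:,} freed up.")
```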

Greg made a bad bet. He could do strictly better, by his lights, by borrowing 10K, giving it to PauseAI, and paying back ~15K (10K + high interest) in 4 years. (Or he could just donate 10K to PauseAI. If he's unable to do this, Vasco should worry about Greg's liquidity in 4 years.) Or he could have gotten a better deal by betting with someone else; if there were a market for this bet, I claim the market price would be substantially more favorable to Greg than paying back 200% (plus inflation) over <4 years (see the rough comparison below).

[Edit: the market for this bet is, like, the market for 4-year personal loans.]
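
To make that concrete, here is a rough comparison of implied annual rates, treating both deals as a single lump-sum repayment after 4 years (a simplifying assumption; the ~15K figure is the illustrative loan above):

```python
# Rough comparison of implied annual interest rates.
# Both deals are modeled as lump-sum repayments after 4 years (simplification).

def implied_annual_rate(borrowed: float, repaid: float, years: float) -> float:
    """Annualized rate implied by borrowing `borrowed` and repaying `repaid`."""
    return (repaid / borrowed) ** (1 / years) - 1

# The bet: receive 10K now, repay 200% (i.e. 20K) plus inflation in ~4 years.
bet_rate = implied_annual_rate(10_000, 20_000, 4)

# A personal loan: receive 10K now, repay ~15K after 4 years.
loan_rate = implied_annual_rate(10_000, 15_000, 4)

print(f"Bet:  ~{bet_rate:.1%}/year (real, since the bet also adds inflation)")
print(f"Loan: ~{loan_rate:.1%}/year (nominal)")
# ~18.9% real vs ~10.7% nominal: the loan strictly dominates the bet.
```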

Yes, I've previously made some folks at Anthropic aware of these concerns, e.g. in connection with this post.

In response to this post, Zac Hatfield-Dodds told me he expects Anthropic will publish more information about its governance in the future.

I claim that public information is very consistent with the hypothesis that the investors hold an axe over the Trust: maybe the Trust will cause the Board to be slightly better, or the investors will abrogate the Trust, or the Trustees will loudly resign at some point; regardless, the Trust is very subordinate to the investors and won't be able to do much.

And if so, I think it's reasonable to describe the Trust as "maybe powerless."

Maybe. Note that they sometimes brag about how independent the Trust is and how some investors dislike it, e.g. Dario:

Every traditional investor who invests in Anthropic looks at this. Some of them are just like, whatever, you run your company how you want. Some of them are like, oh my god, this body of random people could move Anthropic in a direction that's totally contrary to shareholder value.

And I've never heard someone from Anthropic suggest this.

I agree such commitments are worth noticing, and I hope OpenAI and other labs make such commitments in the future. But this commitment is not huge: it's just "20% of the compute we've secured to date" (as of July 2023), to be used "over the next four years." It's unclear how much compute that is, and with compute use increasing exponentially, it may be quite little by 2027 (see the toy model below). Possibly you have private information, but based on public information, the minimum consistent with the commitment is quite small.
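
As a toy model of why, assume (purely for illustration; not an OpenAI figure) that total compute doubles each year:

```python
# Toy model: how big is "20% of compute secured by July 2023" relative to
# total compute in 2027, if compute grows exponentially?
# The doubling rate is an assumed illustrative figure, not an OpenAI number.

secured_2023 = 1.0                 # normalize compute secured by July 2023 to 1
committed = 0.20 * secured_2023    # the stated commitment

annual_growth = 2.0                # assumption: compute doubles each year
years = 4                          # July 2023 -> ~2027
compute_2027 = secured_2023 * annual_growth ** years  # 16x the 2023 base

print(f"Commitment as a share of 2027 compute: {committed / compute_2027:.2%}")
# Under yearly doubling, 20% of 2023 compute is only ~1.25% of 2027 compute;
# faster growth assumptions shrink the share further.
```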

It would be great if OpenAI or others committed 20% of their compute to safety! Even 5% would be nice.

In November, leading AI labs committed to sharing their models before deployment to be tested by the UK AI Safety Institute.

I suspect Politico hallucinated this / there was a game-of-telephone phenomenon. I haven't seen a good source on this commitment. (But I also haven't heard people at labs say "there was no such commitment.")
