Oh also fwiw, I believe this was relevant because the OpenAI nonprofit board was required (by its structure) to have a majority of board members without financial interest in the for-profit. Sam was working towards having majority control of the board, which would have been much harder if he couldn't be on it.
These are good points!
At the time, I thought Nate feeling the need to post and clarify what actually happened was a pretty strong indication that Sam was using this opportunity to pretend they were on better terms with these folks. (Since I think he otherwise never talks to Nate/Eliezer/MIRI? I could be wrong.)
But yeah it could be that someone who still had influence thought this post was important to run by this set of people. (I consider this less likely.)
I don't think Sam would have barely noticed. It sounds like he was the one who asked for feedback.
In any case this event seems like a minor thing, though imo a helpful part of the gestalt picture.
Fwiw the interaction with Nate seems to have mostly been that Sam asked for comments, Nate gave some, and there was no back-and-forth. See Nate's post: https://www.lesswrong.com/posts/uxnjXBwr79uxLkifG/comments-on-openai-s-planning-for-agi-and-beyond
Sam seemed to oversell the relationship with this acknowledgement, so I don't think we should read much into the other names beyond literally "they were asked to review drafts".
Sam is not pitching special chips for OpenAI here, right?
I do not read safety goals into this project, which sounds more like "build many more fabs, distributed around the world, for more chips and less centralization". (Which, fwiw, erodes options for containing specialized chips.)
Related: requiring some kind of insurance that pays out when a certificate becomes net-negative.
Suppose we somehow have accurate valuations of certificates, both positive and negative. We could have insurers sell put options on certificates, while being required to keep their overall portfolio at positive impact. (So an insurer needs to buy positive-impact certificates to offset the negative impact it has taken on.)
Ultimately what's at stake for the insurer is probably some collateral they've put down, so it's a similar proposal.
Good info there re current employees!
I want to draw attention to the distinction between current and former employees, since OpenAI was deploying its leverage on ex-staff while keeping that quiet among current staff. And what confirmation we have is about current Anthropic staff; we haven't heard from ex-staff yet.