It seems that EA tried to "play politics" with Sam Altman and OpenAI, by doing things like backing him with EA money and credibility (in exchange for a board seat) without first having high justifiable trust in him, generally refraining from publicly (or even privately, from what I can gather) criticizing Sam and OpenAI, Helen Toner apologizing to Sam/OpenAI for expressing even mild criticism in an academic paper, and springing a surprise attack or counterattack on Sam by firing him without giving any warning or chance to justify himself.
I wonder how much of this course of action was intended / carefully considered, and whether/what parts people still endorse in retrospect. Or more generally, what lessons are people drawing from this whole episode?
I'm personally unsure whether to update in the direction of "play politics harder/better" or "play politics less and be principled more" or maybe "generally be more principled but play politics better when you have to"? Or even "EA had a pretty weak hand throughout and played it as well as can be reasonably expected"? (It sucks that insiders who can best answer these questions are choosing or committed to not talking.)
EDIT: this is going a bit viral, and it seems like many of the readers have missed key parts of the reporting. I wrote this as a reply to Wei Dai and a high-level summary for people who were already familiar with the details; I didn't write this for people who were unfamiliar, and I'm not going to reference every single claim in it, as I have generally referenced them in my prior comments/tweets and explained the details & inferences there. If you are unaware of aspects like 'Altman was trying to get Toner fired' or pushing out Hoffman or how Slack was involved in Sutskever's flip or why Sutskever flip-flopped back, still think Q* matters, haven't noticed the emphasis put on the promised independent report, haven't read the old NYer Altman profile or Labenz's redteam experience etc., it may be helpful to catch up by looking at other sources; my comments have been primarily on LW since I'm not a heavy EAF user, plus my usual excerpts.
Or even "EA had a pretty weak hand throughout and played it as well as can be reasonably expected"?
It was a pretty weak hand. There is this pervasive attitude that Sam Altman could have been dispensed with easily by the OA Board if it had been more competent; this strange implicit assumption that Altman is some johnny-come-lately and that the Board screwed up by hiring him. Commenters seem to ignore the long history here - if anything, it was he who screwed up by hiring the Board!
Altman co-founded OA. He was the face in initial coverage and 1 of 2 board members (with Musk). He was a major funder of it. Even Elon Musk's main funding of OA was through an Altman vehicle. He kicked out Musk when Musk decided he needed to be in charge of OA. Open Philanthropy (OP) only had that board seat and made a donation because Altman invited them to, and he could personally have covered the $30m or whatever OP donated for the seat; and no one cared or noticed when OP let the arrangement lapse after the initial 3 years. (I had to contact OP to confirm this when someone doubted that the seat was no longer controlled by OP.) He thought up, drafted, and oversaw the entire for-profit thing in the first place, including all provisions related to board control. He voted for all the board members, filling it back up from when it was just him (& Greg Brockman at one point IIRC). He then oversaw and drafted all of the contracts with MS and others, while running the for-profit and eschewing equity in the for-profit. He designed the board to be able to fire the CEO because, to quote him, "the board should be able to fire me". He interviewed every person OA hired, and used his networks to recruit for OA. And so on and so forth.
Credit where credit is due - Altman may not have believed the scaling hypothesis like Dario Amodei, may not have invented PPO like John Schulman, may not have worked on DL from the start like Ilya Sutskever, may not have created GPT like Alec Radford, and may not have written & optimized any code like Brockman - but the 2023 OA organization is fundamentally his work.
The question isn't, "how could EAers* have ever let Altman take over OA and possibly kick them out", but entirely the opposite: "how did EAers ever get any control of OA, such that they could even possibly kick out Altman?" Why was this even a thing given that OA was, to such an extent, an Altman creation?
The answer is: "because he gave it to them." Altman freely and voluntarily handed it over to them.
So you have an answer right there to why the Board was willing to assume Altman's good faith for so long, despite everyone clamoring to explain how (in hindsight) it was so obvious that the Board should always have been at war with Altman and regarding him as an evil schemer out to get them. But that's an insane way for them to think! Why would he undermine the Board or try to take it over, when he was the Board at one point, and when he made and designed it in the first place? Why would he be money-hungry when he refused all the equity that he could so easily have taken - and that, in fact, various partner organizations wanted him to have in order to ensure he had 'skin in the game'? Why would he go out of his way to make the double non-profit with such onerous & unprecedented terms for any investors, which caused a lot of difficulties in getting investment and which Microsoft had to think seriously about, if he just didn't genuinely care or believe any of that? Why any of this?
(None of that was a requirement, or even that useful to the OA for-profit. Other double systems like Mozilla or Hershey don't have such terms; they're just normal corporations with a lot of shares owned by a non-profit, is all. The OA for-profit could've been the same way. Certainly, if all of this was for PR reasons or some insidious decade-long scheme of Altman to 'greenwash' OA, it was a spectacular failure - nothing has occasioned more confusion and bad PR for OA than the double structure or capped-profit. See, for example, my shortly-before-the-firing Twitter argument with the well-known AI researcher Delip Rao, who repeatedly stated & doubled down on the claim that 'the OA non-profit legally owns the OA for-profit' was not just factually wrong but misinformation. He helpfully linked to a page about political misinformation & propaganda campaigns online in case I had any doubt about what the term 'misinformation' meant.)
What happened is, broadly: 'Altman made the OA non/for-profits and gifted most of it to EA with the best of intentions, but then it went so well & was going to make so much money that he had giver's remorse, changed his mind, and tried to quietly take it back; but he had to do it by hook or by crook, because the legal terms said clearly "no takesie backsies"'. Altman was all for EA and AI safety and an all-powerful nonprofit board being able to fire him, and was sincere about all that, until OA & the scaling hypothesis succeeded beyond his wildest dreams†, and he discovered it was inconvenient for him and convinced himself that the noble mission now required him to be in absolute control, never mind what restraints on himself he set up years ago - he now understands how well-intentioned but misguided he was and how he should have trusted himself more. (Insert Garfield meme here.)
No wonder the board found it hard to believe! No wonder it took so long to realize Altman had flipped on them, and it seemed Sutskever needed Slack screenshots showing Altman blatantly lying to them about Toner before he finally, reluctantly, flipped. The Altman you need to distrust & assume bad faith of & need to be paranoid about stealing your power is also usually an Altman who never gave you any power in the first place! I'm still kinda baffled by it, personally.
He concealed this change of heart from everyone, including the board, gradually began trying to unwind it, overplayed his hand at one point - and here we are.
So, what could the EA faction of the board have done? ...Not much, really. They only ever had the power that Altman gave them in the first place.
* I don't really agree with this framing of Sutskever/Toner/McCauley/D'Angelo as "EA", but for the sake of argument, I'll go with this labeling.
† Please try to cast your mind back to when Altman et al would have been planning all this in 2018-2019, with OA rapidly running out of cash after the mercurial Musk's unexpected-but-inevitable betrayal, its DRL projects like OA5 remarkable research successes but commercially worthless, and just some interesting results like GPT-1 and then GPT-2-small coming out of their unsupervised-learning backwater from Alec Radford tinkering around with RNNs and then these new 'Transformer' things. The idea that OA might somehow be worth over ninety billion dollars (yes, that's 'billion' with a 'b') in scarcely 3 years would have been insane, absolutely insane; not a single person in the AI world would have taken you seriously if you had suggested it, and if you had emailed any of them asking how plausible it was, they would have added an email filter to send your future emails to the trash bin. It is very easy to be sincerely full of the best intentions and discuss how to structure your double corporation to deal with windfalls like growing to a $1000b market cap when no one really expects that to happen, certainly not in the immediate future... Thus, no one is sitting around going, 'well wait, we required the board to not own equity, but if the company is worth even a fraction of our long-term targets, and it's recruiting with stock options like usual, then each employee is going to have, like, $10m or even $100m of pseudo-equity in the OA for-profit. That seems... problematic. Do we need to do something about it?'
Thanks, I didn't know some of this history.
The Altman you need to distrust & assume bad faith of & need to be paranoid about stealing your power is also usually an Altman who never gave you any power in the first place! I'm still kinda baffled by it, personally.
Two explanations come to my mind:
1. Past Sam Altman didn't trust his future self, and wanted to use the OpenAI governance structure to constrain himself.
2. His status game / reward gradient changed (at least subjectively from his perspective). At the time it was higher status to give EA more power / appear more safety-conscious, and now it's higher status to take it back / race faster for AGI. (I note there was internal OpenAI discussion about wanting to disassociate with EA after the FTX debacle.)
Both of these reasons probably played some causal role in what happened, but may well have been subconscious considerations. (Also entirely possible that he changed his mind in part for what we'd consider fair reasons.)
So, what could the EA faction of the board have done? …Not much, really. They only ever had the power that Altman gave them in the first place.
Some ideas for what they could have done:
Reasoned about why Altman gave them power in the first place. Maybe come up with hypotheses 1 and 2 above (or others) earlier in the course of events. Try to test these hypotheses when possible and use them to inform decision making.
If they thought 1 was likely, they could have talked to Sam about it explicitly at an early date, asked for more power or failsafes, got more/better experts (at corporate politics) to advise them, monitored Sam more closely, and developed preparations/plans for the possible future fight. Asked Sam to publicly talk about how he didn't trust himself, so that the public would be more sympathetic to the board when the time came.
If 2 seemed likely, tried to manage Altman's status (or reward in general) gradient better. For example, gave prominent speeches / op-eds highlighting AI x-risk and OpenAI's commitment to safety. Asked/forced Sam to frequently do the same thing. Managed risk better so that FTX didn't happen.
Not back Sam in the first place so they could criticize/constrain him from the outside (e.g. by painting him/OpenAI as insufficiently safety-focused and pushing harder for government regulations). Or made it an explicit and public condition of backing him that EA (including the board members) were allowed to criticize and try to constrain OpenAI, and frequently remind the public of this condition, in part by actually doing this.
Made it OpenAI policy that past and present employees are allowed/encouraged to publicly criticize OpenAI, so that for example the public would be aware of why the previous employee exodus (to Anthropic) happened.
1. Past Sam Altman didn't trust his future self, and wanted to use the OpenAI governance structure to constrain himself.
2. His status game / reward gradient changed (at least subjectively from his perspective). At the time it was higher status to give EA more power / appear more safety-conscious, and now it's higher status to take it back / race faster for AGI. (I note there was internal OpenAI discussion about wanting to disassociate with EA after the FTX debacle.)
I think it's reasonable to think that "Constraining Sam in the future" was obviously a highly pareto-efficient deal. EA had every reason to want Sam constrained in the future. Sam had every reason to make that trade, gaining needed power in the short-term, in exchange for more accountability and oversight in the future. This is clearly a sensible trade that actual good guys would make; not "Sam didn't trust his future self" but rather "Sam had every reason to agree to sell off his future autonomy in exchange for cooperation and trust in the near term".
I think "the world changing around Sam and EA, rather than Sam or EA changing" is worth more nuance. I think that, over the last 5 years, the world changed to make groups of humans vastly more vulnerable than before, due to new AI capabilities facilitating general-purpose human manipulation and the world's power players investing in those capabilities. This dramatically increased the risk of outsider third parties creating or exploiting divisions in the AI safety community, to turn people against each other and use the chaos as a ladder. Given that this risk was escalating, then centralizing power was clearly the correct move in response. I'vebeenwarningaboutthisduringthemonths before the OpenAI conflict started, in the preceedingweeks (including the concept of an annual discount rate for each person, based on the risk of that person becoming cognitively compromised and weaponized against the AI safety community), and I even described the risk of one of the big tech companies hijacking Anthropic 5days before Sam Altman was dismissed. I think it's possible that Sam or people in EA also noticed the world rapidly becoming less safe for AI safety orgs, discovering the threat from a different angle than I did.
Nitpicks:
Re 2: It's plausible, but I'm not sure that this is true. Points against:
Will Hurd is plausibly quite concerned about AI Risk[1]. It's hard to know for sure because his campaign website is framed in the language of US-China competition (and has unfortunate-by-my-lights suggestions like "Equip the Military and Intelligence Community with Advanced AI"), but I think a lot of the proposed policies are relevant to AI risk.
Shivon Zilis left OpenAI allegedly because she had Elon Musk's children and this was seen as a COI[2]. To the extent that there's bad blood still between Altman and Musk, if instead of framing the board's divisions as "doomer" vs "non-doomer", we frame them as "skeptical of giving Sam free rein" vs "fine with letting Sam do whatever", there's a reasonable case that Zilis agrees with Musk enough that she would not side with the pro-Sam faction.
(EDIT 2023/12/04: Changed wording to be slightly more precise and slightly less strong) So there's at least some evidence that any or all of Hoffman/Hurd/Zilis (the 3 board members that left recently) would've opposed Sam trying to oust Toner. Far from certain, but I'd currently say[3] >50% (EDIT: that at least one of them would be opposed). Especially if it turns out that one or all of them were themselves pushed out by Altman and they started sharing notes. Of course, ousting Altman in retaliation is a pretty big move, and the more politically savvy ones might've found a better compromise solution.
His wikipedia page says "On September 20, 2023, Hurd unveiled a detailed plan for how he would regulate AI if elected President, comparing AI to nuclear fusion, and proposing creating a branch of the executive to deal solely and directly on the issue of AI, and proposing strict regulations on civilian AI usage." The last one in particular doesn't sound necessarily conducive to OpenAI/Microsoft's advanced AI ambitions.
Convoluted wording because of "executives claimed that they were born via in vitro fertilization (IVF)."
Of course, this counterfactual is hard to verify. The Twitter backlash + OAI revolt probably means people would be hesitant to be publicly pro-Toner, now.
I haven't seen any coverage of the double structure or Anthropic exit which suggests that Amodei helped think up or write the double structure. Certainly, the language they use around the Anthropic public benefit corporation indicates they all think, at least post-exit, that the OA double structure was a terrible idea (eg. see the end of this article).
You don't know that. They seem to have often had near majorities, rather than being a token 1 or 2 board members.
By most standards, Karnofsky and Sutskever are 'doomers', and Zilis is likely a 'doomer' too, as that is the whole premise of Neuralink and she was a Musk representative (which is why she was pushed out after Musk turned on OA publicly and began active hostilities like breaking contracts with OA). Hoffman's views are hard to characterize, but he doesn't seem to clearly come down as an anti-doomer or to be an Altman loyalist. (Which would be a good enough reason for Altman to push him out; and for a charismatic leader, neutralizing a co-founder is always useful, for the same reason no one would sell life insurance to an Old Bolshevik in Stalinist Russia.)
If I look at the best timeline of the board composition I've seen thus far, at a number of times post-2018, it looks like there was a 'near majority' or even outright majority. For example, 2020-12-31 has either a tie or an outright majority for either side depending on how you assume Sutskever & Hoffman voted (Sutskever?/Zilis/Karnofsky/D'Angelo/McCauley vs Hoffman? vs Altman/Brockman), and with the 2021-12-31 list the Altman faction needs to pick up every possible vote to match the existing 5 'EA' faction (Zilis/Karnofsky/D'Angelo/McCauley/Toner vs Hurd?/Sutskever?/Hoffman? vs Brockman/Altman) - although this has to be wrong, because the board maxes out at 7 according to the bylaws, so it's unclear how exactly the plausible majorities evolved over time.
I am reasonably confident Helen replaced Holden as a board member, so I don't think your 2021-12-31 list is accurate. Maybe there was a very short period where they were both on the board, but I heard the intention was for Helen to replace Holden.
The main thing that I doubt is that Sam knew at the time that he was gifting the board to doomers. Ilya was a loyalist and non-doomer when appointed. Elon was I guess some mix of doomer and loyalist at the start. Given how AIS worries generally increased in SV circles over time, more likely than not some of D'Angelo, Hoffman, and Hurd moved toward the "doomer" pole over time.
Ilya has always been a doomer AFAICT, he was just loyal to Altman personally, who recruited him to OA. (I can tell you that when I spent a few hours chatting with him in... 2017 or something? a very long time ago, anyway - I don't remember him dismissing the dangers or being pollyannaish.) 'Superalignment' didn't come out of nowhere or surprise anyone about Ilya being in charge. Elon was... not loyal to Altman but appeared content to largely leave oversight of OA to Altman until he had one of his characteristic mood changes, got frustrated and tried to take over. In any case, he surely counts as a doomer by the time Zilis is being added to the board as his proxy. D'Angelo likewise seems to have consistently, in his few public quotes, been concerned about the danger.
A lot of people have indeed moved towards the 'doomer' pole but much of that has been timelines: AI doom in 2060 looks and feels a lot different from AI doom in 2027.
Hmm, OK. Back when I met Ilya, about 2018, he was radiating excitement that his next idea would create AGI, and didn't seem sensitive to safety worries. I also thought it was "common knowledge" that his interest in safety increased substantially between 2018-22, and that's why I was unsurprised to see him in charge of superalignment.
Re Elon-Zilis, all I'm saying is that it looked to Sam like the seat would belong to someone loyal to him at the time the seat was created.
You may well be right about D'Angelo and the others.
Hm, maybe it was common knowledge in some areas? I just always took him for being concerned. There's not really any contradiction between being excited about your short-term work and worried about long-term risks. Fooling yourself about your current idea is an important skill for a researcher. (You ever hear the joke about Geoff Hinton? He suddenly solves how the brain works, at long last, and euphorically tells his daughter; she replies: "Oh Dad - not again!")
Just judging from his Twitter feed, I got the weak impression D'Angelo is somewhat enthusiastic about AI and didn't catch any concerns about existential safety.
Another possibility is that Sam came to see EA as an incredibly flawed movement, to the point where he wanted EAs like Toner off his board, and just hasn't elaborated the details of his view publicly. See these tweets from 2022 for example.
I think Sam is corrupted by self-interest and that's the primary explanation here, but I actually agree that EA is pretty flawed. (Better than the competition, but still pretty flawed.) As a specific issue OpenAI might have with EA, I notice that EA seems significantly more interested in condemning OpenAI publicly than critiquing the technical details of their alignment plans. It seems like EAs historically either want to suck up to OpenAI or condemn them, without a lot of detailed technical engagement in between.
I checked the WSJ article linked to in this excerpt and I checked your comments on LessWrong, but I couldn't find any mention of Ilya Sutskever seeing Slack screenshots that showed Sam Altman lying. Would you mind clarifying?
Please forgive me if you've already covered this elsewhere.
That was covered in my 2023-11-25 comment.
[Edit: I no longer feel confident about this comment; see thread right below]
Hm, I don't think Altman looks good here either.
We have to wait and see how the public or the media will react, but to me, this looks like it casts doubts on some things he said previously about his specific motivations for building AGI. It's hard to square the claim that working under Microsoft's leadership (and their need to compete with other companies like Alphabet) is a good environment for making AGI breakthroughs with the belief that it'll likely go really well.
Although, maybe he's planning to only build specific apps for Microsoft rather than intending to build AGI there? That would seem like an atypical reduction of ambition/scope to me. Or maybe the plan is "amass more money and talent and then go back to OpenAI if possible, or otherwise start a new AGI thing with more independence from a profit-driven structure." That would be more understandable, but it also feels like he'd be being very agentic about this goal in a way that's scary, and like I'd have to trust this one person's judgment about pulling the brakes when it becomes necessary, even though there's now evidence that many people think he hasn't been cautious enough recently.
I guess we have to wait and see.
Perhaps some of his motivation was to keep OpenAI from imploding?
Hm, very good point! I now think that could be his most immediate motivation. It would feel sad to build something and then see it implode (and for the team to be left in limbo). On reflection, that makes me think maybe Sam doesn't necessarily look that bad here. I'm sure Microsoft tried to use their leverage to push for changes, and the OpenAI board stood its ground, so it couldn't have been easy to find a solution that didn't end with the company falling apart over disagreements.
One thing I hadn't realised is that Ilya Sutskever signed this open letter as well (and he's on the board!).
Oh yes, that is weird. The impression I had was that Ilya might even have been behind Sam's ousting (based on rumours from the internet). I also understood that sacking Sam needed 4 out of 6 board members, and since two of the board members were Sam A and Greg B, that meant everyone else had to have voted for him to leave, including Ilya. Most confusing.
Sutskever appears to have regrets:
https://twitter.com/ilyasut/status/1726590052392956028
How much credibility does he still have left after backtracking?
It's bizarre, isn't it.
Very much hoping the board makes public some of the reasons behind the decision.
His recent twitter post:
I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.
Either he wasn't behind the push, or he was but subsequently decided it was a huge mistake.
Bizarrely, the OpenAI board proposed a merger with Anthropic.
Well, maybe. https://manifold.markets/DanielFilan/on-feb-1-2024-will-i-believe-that-o?r=RGFuaWVsRmlsYW4
There's a decent history of the board changes in OpenAI here: https://loeber.substack.com/p/a-timeline-of-the-openai-board
I think the point that Toner & McCauley are conflicted because of OpenPhil/Holden's connections to Anthropic is a pretty weak argument. But the facts are all verified & pretty basic.
A number of things stand out:
lots of turnover
some turnover for unclear reasons
Adam D'Angelo has a clear conflict of interest
I'm also very curious if anyone knows more about how McCauley came to be on the board? And generally more information about her. I hadn't heard of her before and she's apparently an important player now (also in EA, as an EV UK board member).
Is there still anything the EA community can do regarding AGI safety if a full-scale arms race towards AGI is coming soon, with OpenAI almost surely being absorbed by Microsoft?
Personally, I still think there is a lot of uncertainty around how governments will act. There are at least some promising signs (e.g., UK AI Safety Summit) that governments could intervene to end or substantially limit the race toward AGI. Relatedly, I think there's a lot to be done in terms of communicating AI risks to the public & policymakers, drafting concrete policy proposals, and forming coalitions to get meaningful regulation through.
Some folks also have hope that internal governance (lab governance) could still be useful. I am not as optimistic here, but I don't want to rule it out entirely.
There's also some chance that we end up getting more concrete demonstrations of risks. I do not think we should wait for these, and I think there's a sizable chance we do not get them in time, but I think "have good plans ready to go in case we get a sudden uptick in political will & global understanding of AI risks" is still important.
I think that trying to get safe concrete demonstrations of risk by doing research seems well worth pursuing (I don't think you were saying it's not).
Too soon to tell I think. Probably better to wait for the dust to settle.
(Not that it matters much, but my own guess is that many of us who are x-risk focused should a) be as cooperative and honest with the public about our concerns with superhuman AI systems as possible, and hope that there's enough time left for the balance of reason to win out, and b) focus on technical projects that don't involve much internal politics.
Working on cause areas that are less fraught than x-risk also seems like a comparatively good idea, now.
Organizational politics is both corrupting and not really our (or at least my) strong suit, so best to leave it to others).
My guess is that the letter is largely a bluff. I don't think these people want to work for Microsoft. I'm surprised Altman decided that was his best move vs starting his own company. Perhaps this implies that starting from scratch is not as easy as we think. Microsoft has the license to most (all?) of OpenAI's tech, so they would be able to hit the ground running.
From OpenPhil's $30m to firing Sam, EA helped to create and grow one of the most formidable AI research teams, then handed it over to Clippy!
I think this is an overly reductive view of the situation, to be honest.