Helen Toner: "For years, Sam had made it really difficult for the board to actually do that job by withholding information, misrepresenting things that were happening at the company, in some cases, outright lying to the board.
At this point, everyone always says, What? Give me some examples. I can't share all the examples, but to give a sense of the thing that I'm talking about, it's things like when ChatGPT came out, November 2022, the board was not informed in advance about that. We learned about ChatGPT on Twitter.
Sam didn't inform the board that he owned the OpenAI Startup Fund, even though he constantly was claiming to be an independent board member with no financial interest in the company.
On multiple occasions, he gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically impossible for the board to know how well those safety processes were working or what might need to change.
Then a last example that I can share because it's been very widely reported relates to this paper that I wrote, which has been, I think, way overplayed in the press. The problem was that after the paper came out, Sam started lying to other board members in order to try and push me off the board.
It was another example that just really damaged our ability to trust him, and actually only happened in late October last year when we were already talking pretty seriously about whether we needed to fire him.
There are more individual examples. For any individual case, Sam could always come up with some innocuous-sounding explanation of why it wasn't a big deal or it was misinterpreted or whatever.
But the end effect was that after years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn't believe things that Sam was telling us. That's a completely unworkable place to be in as a board, especially a board that is supposed to be providing independent oversight over the company, not just helping the CEO to raise more money."
Several sources have suggested that the ChatGPT release was not expected to be a big deal. Internally, ChatGPT was framed as a “low-key research preview”. From The Atlantic:
If that's true, then perhaps it wasn’t ex ante above the bar to report to the board.
Andrew Mayne points out that “the base model for ChatGPT (GPT 3.5) had been publicly available via the API since March 2022”.
What would be more helpful for me is knowing what else was discussed in board meetings. Even if ChatGPT was not expected to be a big deal, the omission looks very different depending on context. If the board was (hyperbolic example) discussing whether to have a coffee machine at the office, then not mentioning ChatGPT would be striking. On the other hand, if the board only met once a year and only discussed e.g. financial viability, then not mentioning ChatGPT makes more sense. And even that might not settle it: it would also be concerning if some board members asked for more information and did not get it. If a board member requested more detail on product development and ChatGPT was still not mentioned, that would look bad too. The context and particulars of this specific board matter.
To me, this is the most damning element. It would have required some amount of active deceit to pull off; indeed, the current website reads:
In particular, this revelation makes it look like the main reason the fund was started wasn't to create a developer ecosystem around OpenAI (as claimed), but to personally tie Sam to the success of OpenAI.
(The reason this is a serious problem: having a serious financial stake in the success of AI technology disincentivises you from caring about the negative externalities of growing that business, and avoiding exactly that conflict is what OpenAI's byzantine structure was designed to accomplish.)
Bret Taylor and Larry Summers (members of the current OpenAI board) have responded to Helen Toner and Tasha McCauley in The Economist.
The key passages:
For context:
On (1): it's very unclear how ownership could be compatible with no financial interest.
Maaaaaybe (2) explains it. That is: while ownership does legally entail financial interest, it was agreed that this was only a pragmatic stopgap measure, such that in practice Sam had no financial interest.
Thanks for the context!
Obvious flag that this still seems very sketchy. "The easiest way to do that due to our structure was to put it in Sam's name"? Given all the red flags this drew, both publicly and within the board, I find it hard to believe it was done solely "to make things go quickly and smoothly."
I remember Sam Bankman-Fried used a similar argument around registering Alameda; in that case, I believe it later gave him a lot more power.
Sam has publicly said he has no equity in OpenAI. I've not been able to find public quotes where Sam says he has no financial interest in OpenAI (does anyone have a link?).
It would be hard to imagine he has no interest; I would say even a simple bonus scheme, whether stock, options, cash, etc., would count as "interest". If the company makes money, then so does he.
He said this during that initial Senate hearing iirc, and I think he was saying this line frequently around then (I recall a few other instances but don't remember where).
Oh also fwiw, I believe this was relevant because the OpenAI nonprofit board was required (by its structure) to have a majority of board members without financial interest in the for-profit. Sam was working towards having majority control of the board, which would have been much harder if he couldn't be on it.
The stated behaviour sounds like grounds for
If none of that worked, they could publicly call for his resignation and if he didn't give it, then make the difficult decision of whether to oust him on nonspecific grounds or collectively resign as the board.
Choosing instead to fire him, to the complete shock of other employees and the world at large, still seems like such a deeply counterproductive path that it inclines me towards scepticism of her subsequent justification and towards the interpretation of bad faith Peter presented in this comment.