
According to this page, the OpenAI board is as follows:

OpenAI is governed by the board of the OpenAI Nonprofit, currently comprised of Independent Directors Bret Taylor (Chair), Sam Altman, Adam D’Angelo, Dr. Sue Desmond-Hellmann, Retired U.S. Army General Paul M. Nakasone, Nicole Seligman, Fidji Simo, Larry Summers and Zico Kolter.

Who are these people and to what extent are they committed and qualified to ensure the development of safe, beneficial AGI?


Probably most of these board members are "committed to safety" in the sense that they wouldn't actively approve conduct at their organization where executives pushed developers to do things that those executives themselves presented as reckless. To paint an exaggerated picture, imagine if some executive said the following:

"I might be killing millions of people here if something goes wrong, and I'm not super sure if this will work as intended because the developers flagged significant uncertainties and admitted that they're just trying things out essentially flying blind; still, we won't get anywhere if we don't take risks, so I'm going to give everyone the go-ahead here!"

In that scenario, I think most people would probably object quite strongly.

But as you already suggest, the more relevant question is one of willingness to go the extra mile, and of qualifications: Will the board members care about and be able to gain an informed understanding of the risks of certain training runs, product releases, or model security measures? Alternatively, will they care about (and be good at) figuring out whose opinions and judgment they can trust and defer to on these matters?

With the exodus of many of the people who were concerned about mitigating risks of AI going badly wrong, the remaining culture there will likely be more focused on upsides, on moving fast, on beating competitors, etc. There will likely be fewer alarming disagreements among high-ranking members of the organization (because the ones who had alarming disagreements already left). The new narrative on safety will likely be something like, "We have to address the fact that there was this ideological club of doomer folks who used to work at OpenAI. I think they were well-intentioned (yada yada), but they were wrong because of their ideological biases, and it's quite tragic because the technology isn't actually that risky the way we're currently building it." (This is just my guess; I don't have any direct info on what the culture is now like, so I might be wrong.) 

So, my guess is that the rationales the leadership will present to board members on any given issue will often seem very reasonable ASSUMING that you go in with the prior of "I trust that leadership has good judgment here." The challenge for board members is that they may have to go beyond the way issues are presented to them and ask questions that leadership wouldn't even put on the meeting agenda. For instance, they could ask for costly signals of commitment: concrete steps the organization could take to create a healthy culture for assessing risks and for discussing acceptable risk tradeoffs. (Past events suggest that this sort of culture is less likely to exist now than it was before the exodus of people who did safety-themed work.)

To summarize, my hope is for board members to take seriously the following three possibilities: (1) that there might be big risks in the AGI tech tree; (2) that org leadership might not believe in these risks, or might downplay them because it's convenient for them that way; (3) that org-internal discussions of risks from AI might appear one-sided because of "evaporative cooling" (most of the people who were especially concerned have already left, for reasons unrelated to their judgment or forecasting abilities).
