
A while ago OpenPhil gave a decent sum of money to OpenAI to buy a board seat. Since then various criticisms of OpenAI have been made. Do we know anything about how OpenPhil used its influence via that board seat?


I'm not sure what can be shared publicly for legal reasons, but I would note that in board dynamics generally it's pretty tough to clearly establish counterfactual influence. At a high level, Holden was holding space for safety and governance concerns and encouraging the rest of the leadership to spend time and energy thinking about them.

I believe the implicit premise of the question is something like "do those benefits outweigh the potential harms of the grant?" Personally, I see this as resting on a misunderstanding, i.e. the idea that OP helped OpenAI come into existence and that it might not have happened otherwise. I've gone back and looked at some of the comms from around that time (2016), and debriefed with Holden, and I think the most likely counterfactual is that the time to the next fundraising (2019) and creation of the for-profit entity would have been shortened (due to less financial runway). Another possibility is that the other funders from the first round would have made larger commitments. I give effectively 0% of the probability mass to OpenAI not starting up.

[anonymous]

Personally, I see this as resting on a misunderstanding, i.e. the idea that OP helped OpenAI come into existence and that it might not have happened otherwise.

I think some people have this misunderstanding, and I think it's useful to address it.

With that in mind, I don't think most people who say "do those benefits outweigh the potential harms" are assuming that the counterfactual was "no OpenAI." I think they're assuming the counterfactual is something like "OpenAI has less money, or has to take somewhat less favorable deals with investors, or has to do something it considered less desirable than 'selling' a board seat to Open Phil."

(I don't consider myself to have strong takes on this debate, and I think there are lots of details I'm missing. I have spoken to some people who seem invested in this debate, though.)

My current ITT of a reasonable person who thinks the harms outweighed the benefits says something like this: "OP's investment seems likely to have accelerated OpenAI's progress and affected the overall rate of AI progress. If OP had not invested, OpenAI likely would have had to do something else that was worse for them (from a fundraising perspective) which could have slowed down OpenAI and thus slowed down overall AI progress."

Perhaps this view is mistaken (e.g., maybe OpenAI would have just fundraised sooner and started the for-profit entity sooner). But at first glance, giving up a board seat seems pretty costly, which makes me wonder why OpenAI would have given one up if they had less costly alternatives.

(I also find it plausible that the benefits outweighed the costs, though my ITT of a reasonable person on the other side says something like "what were the benefits? Are there any clear wins that are sharable?")

Unless it's a hostile situation (as can happen with public companies and activist investors), I don't think it's actually that costly. At seed stage, it's just normal to give board seats to major "investors", and you want a good relationship with both your major investors and your board.

The attitude Sam had at the time was less "please make this grant so that we don't have to take a bad deal somewhere else, and we're willing to 'sell' you a board seat to close the deal" and more "hey would you like to join in on this? we'd love to have you. no worries if not."

[anonymous]

Thanks for this context. Is it reasonable to infer that you think OpenAI would've gotten a roughly equally desirable investment if OP had not invested? (Such that the OP investment had basically no effect on acceleration?)

Yes, that's my position. My hope is that we actually slowed acceleration by participating, but I'm quite skeptical of the view that we added to it.

[anonymous]

Thanks! I found this context useful. 

I give effectively 0% of the probability mass to OpenAI not starting up.

I think an important question here is whether, if OpenPhil had not recommended the $30M grant in 2017, OpenAI would have reached the critical level of success necessary for, say, convincing Microsoft to throw $1B at them before exhausting their runway.

[...] I think the most likely counterfactual is that the time to the next fundraising (2019) and creation of the for-profit entity would have been shortened (due to less financial runway).

This seems reasonable. On the other hand, if OpenAI had not reached a critical level of success as a non-profit, it is not obvious to me that their for-profit spinoff would have succeeded in raising enough investment to get them to that level. They would probably have needed to compete with many other for-profit AI startups for funding.

Why do you believe that's binary? (Vs. just less funding or a smaller valuation at the first round.)

I think this type of ML research (i.e. trying to train groundbreaking neural networks) is pretty messy and unpredictable, and money and luck are fungible to some extent. It's not as though, back in 2017, OpenAI's researchers could perfectly predict which ML experiments would succeed, or how to turn $X of GPU hours into an impressive model that would allow them to raise >$X in the next round with probability 1.

For example, suppose OpenAI's researchers ran some expensive experiment in 2017, and did not get impressive results. They then need to decide whether to give up on that particular approach, or just tweak some hyperparameters and run another such experiment. The amount of remaining funding they have at that point may determine their decision.

Again, why does it have to be X=$1B and probability 1?

It seems like, if the $30M mattered, the counterfactual is that they would have needed to raise $30M at the end of their runway, at any valuation, rather than $1B, in order to bridge to the more impressive model. There should be a sizeable gap between what constitutes a sufficiently impressive model in those two scenarios. In theory they also had "up to $1B" in grants from their original funders, including Elon, that should have been possible to draw on if needed.

How did you come to the conclusion that funding ML research is "pretty messy and unpredictable"? I've seen many ML companies funded over the years as straightforwardly as other tech startups, especially if they had great professional backgrounds, as was clearly the case with OAI. This seems like an unnecessary assumption on top of other unnecessary assumptions.

How did you come to the conclusion that funding ML research is "pretty messy and unpredictable"? I've seen many ML companies funded over the years as straightforwardly as other tech startups, […]

I think it's important to distinguish here between companies that intend to use existing state-of-the-art ML approaches (where the innovation is on the product side) and companies that intend to advance the state of the art in ML. I'm only claiming that research aiming to advance the state of the art in ML is messy and unpredictable.

To illustrate my point: if we use an extreme version of the messy-and-unpredictable view, we can imagine that OpenAI's research was like repeatedly drawing balls from an urn, where each draw costs $1M and there is a 1% chance (or whatever) of drawing a Winning Ball (analogous to getting a super impressive ML model). The more funding OpenAI has, the more balls they can draw, and thus the more likely they are to draw a Winning Ball. Giving OpenAI $30M increases their chance of drawing a Winning Ball, though that increase must be small if they have access to much more funding than $30M (even without a super impressive ML model).
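
To make the arithmetic behind this analogy concrete, here is a minimal sketch in Python (all numbers are the analogy's illustrative assumptions, not real funding figures):

```python
# Illustrative urn model from the analogy above: each draw costs $1M and
# has a 1% chance of being a "Winning Ball". All numbers are hypothetical.

def p_win(budget_millions, cost_per_draw=1.0, p_per_draw=0.01):
    """Probability of drawing at least one Winning Ball given a budget."""
    draws = int(budget_millions / cost_per_draw)
    return 1 - (1 - p_per_draw) ** draws

# Marginal effect of an extra $30M at different hypothetical baseline budgets.
for base in (50, 200, 1000):
    delta = p_win(base + 30) - p_win(base)
    print(f"baseline ${base}M: extra $30M raises P(win) by {delta:.5f}")
```

On these made-up numbers, the extra $30M raises the chance of a Winning Ball by about 0.16 at a $50M baseline but by well under 0.001 at a $1B baseline, which is exactly the diminishing-marginal-effect point above.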

I understood what you meant before, but still see it as a bad analogy.

For context, I saw many rounds of funding as a board member at Vicarious, which was a pure research lab for most of its life (it later attempted robotics, but that small revenue actually devalued it in the eyes of investors). There, what it took was someone getting excited about the story, plus smaller performance milestones along the way.

[comment deleted]

AFAIK this is not something that can be shared publicly. 

My source: I remember Ajeya mentioning at one point that it led to positive changes, and that she doesn't think it was a bad decision in retrospect, but that she can't get into said changes for NDA reasons.
