
Cullen

4034 karma · Joined · Working (0-5 years) · Arlington, VA 22202, USA
cullenokeefe.com

Bio

I am a lawyer and policy researcher interested in improving the governance of artificial intelligence. I currently work as Director of Research at the Institute for Law & AI. I previously worked in various legal and policy roles at OpenAI.

I am also a Research Affiliate with the Centre for the Governance of AI and a VP at the O’Keefe Family Foundation.

My research focuses on the law, policy, and governance of advanced artificial intelligence.

You can share anonymous feedback with me here.

Sequences (2)

Law-Following AI
AI Benefits

Comments (316)

Topic contributions (24)

Yeah, fair. I should have considered that more.

Example: They crammed three cosmonauts into a capsule initially designed for one person. But due to the size constraints, the cosmonauts couldn't wear proper spacesuits; they had to wear leisure suits!

Pretty wild discussion in this podcast about how aggressively the USSR cut corners on safety in its space program in order to stay ahead of the US. In the author's telling of the history, this was in large part because Khrushchev wanted to rack up as many "firsts" (e.g., first satellite, first woman in space) as possible. This seems to have been most proximately for prestige and propaganda rather than any immediate strategic or technological benefit (though of course the space program did eventually produce such benefits).

Evidence for the following claim about AI: people may not need high material benefits as a reason to cut corners on safety; they may do so just for the prestige and glory of being first.

https://www.lawfaremedia.org/article/chatter--the-harrowing-history-of-the-soviet-space-program-with-john-strausbaugh

It could be the case that the board would reliably fail in all nearby fact patterns but that market participants simply did not know this, because there were important and durable but unknown facts about e.g. the strength of the MSFT relationship or players' BATNAs.

I agree this is an alternative explanation. But my personal view is also that the common wisdom that it was destined to fail ab initio is incorrect. I don't have much more knowledge than other people do on this point, though.

I think it would be fair to describe some Presidents as being effectively powerless with regard to their veto, yes, if the other party controls a super-majority of the legislature and has good internal discipline.

(Emphasis added.) I think this is the crux of the argument. I agree that the OpenAI board may have been powerless to accomplish a specific result in a specific situation. Similarly, in this hypo, the President may be powerless to accomplish a specific result (vetoing legislation) in a specific situation.

But I think this is very far away from saying a specific institution is "powerless" simpliciter, which is what I disagreed with in Zach's headline. (And so I would similarly disagree that the President is "powerless" simpliciter in your hypo.)

An institution's powers will almost always be constrained significantly by both law and politics, so showing significant constraints on an institution's ability to act unilaterally is very far from showing it overall completely lacks power.


I agree this would be appealing to intellectually consistent conservatives, but this seems like a bad meme to be spreading/strengthening for animal welfare. Maybe local activists should feel free to deploy it if they think they can flip some conservative's position, but they will be setting themselves up for charges of hypocrisy if they later want to e.g. ban eggs from caged chickens.

How are you defining "powerless"? See my previous comment: I think the common meaning of "powerless" implies not just significant constraints on power but rather the complete absence thereof.

I would say that the LTBT is powerless iff it can be trivially prevented from accomplishing its primary function—overriding the financial interests of the for-profit Anthropic investors—by those investors, such as with a simple majority (which is the normal standard of corporate control). I think this is very unlikely to be true, p<5%.

I definitely would not say that the OpenAI Board was powerless to remove Sam in general, for the exact reason you say: they had the formal power to do so, but it was politically constrained. That formal power is real and, unless it can be trivially overruled in any instance in which it is exercised for the purpose for which it exists, sufficient to not be "powerless."

It turns out that they were maybe powerless to remove him in that instance and in that way, but I think there are many nearby fact patterns on which the Sam firing could have worked. This is evident from the fact that, in the period of days after November 17, prediction markets gave much less than 90% odds—and for many periods of time much less than 50%—that Sam would shortly come back as CEO.

As an intuition pump: Would we say that the President is powerless just because the other branches of government can constrain her (e.g., through the impeachment power or ability to override her veto) in many cases? I think not.

"Powerless" under its normal meaning is a very high bar, meaning completely lacking power. Taking all of Anthropic's statements as true, I think we have evidence that the LTBT has significant powers (the ability to appoint an increasing number of board members), with unclear but significant legal and (an escalating supermajority requirement) and political constraints on those powers. I think it's good to push for both more transparency on what those constraints are and for more independence. But unless a simple majority of shareholders are able to override the LTBT—which seems to be ruled out by the evidence—I would not describe them as powerless.

I think "powerless" is a huge overstatement of the claims you make in this piece (many of which I agree with). Having powers that are legally and politically constrained is not the same thing as the nonexistence of those powers.

I agree though that additional information about the Trust and its relationship to Anthropic would be very valuable.

I am not under any non-disparagement obligations to OpenAI.

It is important to me that people know this, so that they can trust any future policy analysis or opinions I offer.

I have no further comments at this time.

I'm sorry for not getting around to responding to this, and may not be able to for some time. But I wanted to quickly let you know that I appreciated both this comment and your post, and both updated me significantly toward your position and away from my Reason 4.
