We just published an interview: Emergency pod: Judge plants a legal time bomb under OpenAI (with Rose Chan Loui). Listen on Spotify, watch on YouTube, or click through for other audio options, the transcript, and related links.
Episode summary
…if the judge thinks that the attorney general is not acting for some political reason, and they really should be, she could appoint a ‘special interest party’…. That’s the court saying, “I’m not seeing the public’s interest sufficiently protected here.” — Rose Chan Loui
When OpenAI announced plans to convert from nonprofit to for-profit control last October, it likely didn’t anticipate the legal labyrinth it now faces. A recent court order in Elon Musk’s lawsuit against the company suggests OpenAI’s restructuring faces serious legal threats, which will complicate its efforts to raise tens of billions in investment.
As nonprofit legal expert Rose Chan Loui explains, the court order set up multiple pathways for OpenAI’s conversion to be challenged. Though Judge Yvonne Gonzalez Rogers denied Musk’s request to block the conversion before a trial, she expedited proceedings to the fall so the case could be heard before the conversion is likely to go ahead. (See Rob’s brief summary of developments in the case.)
And if Musk’s donations to OpenAI are enough to give him the right to bring a case, Rogers sounded very sympathetic to his objections to the OpenAI foundation selling the company in a way that benefits the founders, who forswore “any intent to use OpenAI as a vehicle to enrich themselves.”
But that’s just one of multiple threats. The attorneys general (AGs) in California and Delaware both have standing to object to the conversion on the grounds that it is contrary to the foundation’s charitable purpose and therefore wrongs the public — which was promised all the charitable assets would be used to develop AI that benefits all of humanity, not to win a commercial race. Some, including Rose, suspect the court order was written as a signal to those AGs to take action.
And, as she explains, if the AGs remain silent, the court itself, seeing that the public interest isn’t being represented, could appoint a “special interest party” to take on the case in their place.
This places the OpenAI foundation board in a bind: proceeding with the restructuring despite this legal cloud could expose them to the risk of being sued for a gross breach of their fiduciary duty to the public. The board is made up of respectable people who didn’t sign up for that.
And of course it would cause chaos for the company if all of OpenAI’s fundraising and governance plans were brought to a screeching halt by a federal court judgment landing at the eleventh hour.
Host Rob Wiblin and Rose Chan Loui discuss all of the above as well as what justification the OpenAI foundation could offer for giving up control of the company despite its charitable purpose, and how the board might adjust their plans to make the for-profit switch more legally palatable.
This episode was originally recorded on March 6, 2025.
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions: Katy Moore
I would like to see more developed thinking in EA circles about what a plausible remedy would look like if Musk prevails here. The possibility of "some kind of middle ground here" was discussed on the podcast, and I'd keep those kinds of outcomes in mind if Musk were to prevail at trial.
In @Garrison's helpful writeup, he observes that:
And I would guess that's going to be a key element of OpenAI's argument at trial. They may assert that subsequent developments establish that nonprofit development of AI is financially infeasible, that they are going to lose the AI arms race without massive cash infusions, and that obtaining infusions while the nonprofit is in charge isn't viable. If the signs are clear enough that the mission as originally envisioned is doomed to fail, then switching to a backup mission doesn't seem necessarily unreasonable under general charitable-law principles to me. The district court didn't need to go there at this point given that the existence of an actual contract or charitable trust between the parties is a threshold issue, and I am not seeing much on this point in the court's order.
To me, this is not only a defense for OpenAI but is also intertwined with the question of remedy. A permanent injunction is not awarded to a prevailing party as a matter of right. Rather:
According to well-established principles of equity, a plaintiff seeking a permanent injunction must satisfy a four-factor test before a court may grant such relief. A plaintiff must demonstrate: (1) that it has suffered an irreparable injury; (2) that remedies available at law, such as monetary damages, are inadequate to compensate for that injury; (3) that, considering the balance of hardships between the plaintiff and defendant, a remedy in equity is warranted; and (4) that the public interest would not be disserved by a permanent injunction.
eBay Inc. v. MercExchange, L.L.C., 547 U.S. 388 (2006) (U.S. Supreme Court decision).
The district court's discussion of the balance of equities focuses on the fact that "Altman and Brockman made foundational commitments foreswearing any intent to use OpenAI as a vehicle to enrich themselves." It's not hard to see how an injunction against payola for insiders would meet traditional equitable criteria.
But an injunction that could pose a significant existential risk to OpenAI's viability could run into some serious problems on prong four. It's not likely that the district court would conclude the public interest affirmatively favors Meta, Google, xAI, or the like reaching AGI first as opposed to OpenAI. There is a national-security angle to the extent that the requested injunction might increase the risk of another country reaching AGI first. And to the extent that the cash from selling off OpenAI control would be going to charitable ends rather than lining Altman's pockets, it's going to be hard to argue that OpenAI's board has a fiduciary duty to just shut it all down and vanish ~$100B in charitable assets into thin air.
And put in more EA-coded language: the base rate of courts imploding massive businesses (or charities) is not exactly high. One example in which something like this did happen was the breakup of the Bell System (ordered by consent decree in 1982 and completed in 1984), but it wasn't quick, the evidence of antitrust violations was massive, and there just wasn't any other plausible remedy. Another would be the breakup of Standard Oil in 1911, again a near-monopoly with some massive antitrust problems.
If OpenAI is practically enjoined from raising the capital needed to achieve its goals, the usual responsible course for a charity that can no longer effectively function is to sell off its assets and distribute the proceeds to other nonprofits. Think about a nonprofit set up to run a small rural hospital that is no longer viable on its own. It might prefer to merge with another nonprofit, but selling the whole hospital to a for-profit chain is usually the next-best option, with selling the land and equipment as a backup option. In a certain light, how different might a sale be from what OpenAI is proposing to do? I'd want to think more about that . . .
With Musk as plaintiff, there are also some potential concerns on prong three relating to laches (the idea that Musk slept on his rights and prejudiced OpenAI-related parties as a result). Although I'm not sure whether the interests of OpenAI investors and employees (other than Altman and Brockman) with equity-like interests would be analyzed under prong three or prong four, it does seem that Musk sat around without asserting his rights while others invested cash and/or sweat equity into OpenAI. In contrast, "[t]he general principle is, that laches is not imputable to the government . . . ." United States v. Kirkpatrick, 22 U.S. (9 Wheat.) 720, 735 (1824). I predict that any relief granted to Musk will need to take account of these third-party interests, especially because those investments were made while Musk slept on his rights. The avoidance of a laches argument is another advantage of a governmental litigant as opposed to Musk (although the third-party interests would still have to be considered).
All that is to say: while "this is really important and what OpenAI wants is bad" may be an adequate basis for public advocacy for now, I think there will need to be a judicially and practically viable plan for what appropriate relief looks like at some point. Neither side in the litigation would be a credible messenger on this point, as OpenAI is compromised and its competitor Musk would like to pick off assets for his own profit and power-seeking purposes. I think that's one of the places where savvy non-party advocacy could make a difference.
Would people rather see OpenAI sold off to whatever non-insider bidder the board determines would be best, possibly with some judicial veto of a particularly bad choice? Would people prefer that a transition of some sort go forward, subject to imposition of some sort of hobbles that would slow OpenAI down and require some safety and ethics safeguards? These are the sorts of questions on which I think a court would be more likely to defer to the United States as an amicus and/or to the state AGs, and would be more likely to listen to subject-matter experts and advocacy groups who sought amicus status.
I'm confused about this line of argument. Why is losing the AI arms race relevant to whether the mission as originally envisioned is doomed to fail?
I tried to find the original mission statement. Is the following correct?
If so, I can see how OpenAI could try to argue that "advanc[ing] digital intelligence in the way that is most likely to benefit humanity as a whole" necessitates their "winning the AI arms race", but I don't exactly see why an impartial observer should grant them that.
Thanks for sharing this; it's very informative and helpful for highlighting a potential leverage point. Strong upvoted.
One minor point of disagreement: I think you are being a bit too pessimistic here:
There are few examples of US courts blowing up large US corporations, but that is not exactly the situation here. OpenAI might claim that preventing a for-profit conversion would destroy or fatally damage the company, but they do not have proof. There is a long history of businesses exaggerating the harm from new regulations, claiming they will be ruinous when actually human ingenuity and entrepreneurship render them merely disadvantageous. The fact is that thus far OpenAI has raised huge amounts of money and been at the forefront of scaling with its current hybrid structure, and I think a court could rightfully be skeptical of unproven claims that this cannot continue.
I think a closer example might be when the DC District Court sided with the FTC and blocked the Staples-Office Depot merger on somewhat dubious grounds. The court didn't directly implode a massive retailer... but Staples did enter administration shortly afterwards, and my impression at the time was the causal link was pretty clear.
If you're in EA in California or Delaware and believe OpenAI has a significant chance of achieving AGI first and that there will be a takeoff, it's probably time-effective to write a letter to your AG encouraging them to pursue action against OpenAI. OpenAI's nonprofit structure isn't perfect, but it's infinitely better than a purely private company would be.
Thanks, I wrote a letter to my California AG because of this comment. See here for a workflow someone else made to write a letter to the California or Delaware AG. My letter is here if anyone wants to take a look for inspiration.
For an AG, should you handwrite the letter, like with congressmember offices, or type and print it like with normal legal work?
Congressional offices often ignore typed letters because they learned years ago that some people set up mills to mass-produce cookie-cutter letters mimicking civic engagement (instead of the legitimate approach of reaching out to interested people and convincing them to write their own letters), so they treat handwritten letters as a costly signal that a large number of highly engaged people are involved. But if attorneys general aren't in an equilibrium like that, then they'd probably prefer typed and printed letters.
Would be really curious to see what evidence you're looking at re: congressional offices ignoring typed letters. The last thing I saw on this (from this post) showed individualized emails slightly outperforming individualized hand-written letters, but both far outperforming form-based emails, probably for the reasons you mention.
Also, I spent some time looking for grassroots campaigns to state AG offices earlier this year and found ~none, so I think there's a good chance the novelty of any grassroots outreach might be more impactful than it is for congressional offices. That's pure speculation on my part though.
Christ, why isn’t OpenPhil taking any action, even making a comment or filing an amicus curiae?
I certainly hope there’s some legitimate process going on behind the scenes; this seems like an awfully good time to spend whatever social/political/economic/human capital OP leadership wants to say is the binding constraint.
And OP is an independent entity. If the main constraint is “our main funder doesn’t want to pick a fight,” well, so be it: I guess Good Ventures won’t sue as a proper donor the way Musk is, but OP can still submit some sort of non-litigant comment. Naively, at least, that could weigh non-trivially on a judge or AG.
[warning: speculative]
As potential plaintiff: I get the sense that OP & GV are more professionally run than Elon Musk's charitable efforts. When handing out this kind of money for this kind of project, I'd normally expect them to have negotiated terms with the grantee and memorialized them in a grant agreement. There's a good chance that agreement would have a merger clause, which confirms that (e.g.) there are no oral agreements or side agreements. Attorneys regularly use these clauses to prevent either side from getting out of or going beyond the negotiated final written agreement. Even if there isn't a merger clause, the presence of a comprehensive grant agreement would likely make it harder for the donor to show that a trust had been created, that the donor had a reversionary interest, or so on if the agreement didn't say those things.
As potential source of evidence: I'd at least consider the possibility that people associated with OP and/or GV could be witnesses at trial or could provide documentary evidence -- e.g., if there is a dispute over what representations OpenAI was making to major donors to secure funding. That might counsel keeping quiet at this juncture, particularly considering the next point.
As a potential amicus: I expect the court would either reject or ignore an amicus filing at this stage in the process. The court has jurisdiction over a claim by Elon Musk and xAI that OpenAI violated antitrust law, violated a contract or trust with Musk under California charitable law, etc. If OP/GV tried to submit an amicus brief on most of the actually relevant legal issues on a preliminary injunction, the court would likely see this as an improper attempt to effectively buy additional pages of argument for Musk & xAI.[1] To the extent that the amicus brief was about a legally peripheral issue -- like AI as a GCR -- it would likely be read by a law clerk (a bright recent graduate) who would tell the judge something like "This foundation submitted an amicus brief arguing that AI may go rogue and kill us all. Doesn't seem relevant to the issues in this case."
Note that I think there is a potential role for amici later in this case, but the preliminary-injunction stage was not it.
See Ryan v. Commodity Futures Trading Com’n, 125 F.3d 1062, 1063 (7th Cir. 1997) (Posner, C.J., in chambers) ("The vast majority of amicus curiae briefs are filed by allies of litigants and duplicate the arguments made in the litigants’ briefs, in effect merely extending the length of the litigant’s brief. Such amicus briefs should not be allowed. They are an abuse. The term 'amicus curiae' means friend of the court, not friend of a party."). In my experience, these sorts of amicus briefs do have a place when the core legal issue is of broad importance but the litigant lacks either the means or incentive to put forth their best argument.
I agree that most such briefs are from close ideological allies, but I'm curious about your suggestion that the court would reject them on this ground. Surely all the organizations filing somewhat duplicative amicus curiae briefs all the time do so because they think it is helpful?
That quotation is from an order by then-Chief Judge Posner of the Seventh Circuit denying leave to file an amicus brief on such a basis. Judge Posner was, and the Seventh Circuit is, more of a stickler for this sort of thing (and both were/are more likely to call lawyers out for not following the rules than other courts). Other courts are less likely to actually kick an amicus brief -- that requires more work than just ignoring it! -- but I think Judge Posner's views would enjoy general support among the federal judiciary.
There's a literature on whether amicus briefs are in general helpful vs. being a waste of money, although it mostly focuses on the Supreme Court (e.g., this article surveys some prior work and reflects interviews with former clerks, but is a bit dated). I don't see an amicus brief on the preliminary injunction here hitting many of the notes the former clerks identified as markers of value in that article. Whether there was a charitable trust between Musk and OpenAI isn't legally esoteric, there's no special perspective the amicus can bring to bear on that question, and so on.
You're right insofar as amicus briefs are common at the Supreme Court level, although they are not that common in the courts of appeals (at least when I clerked) and I think they are even less common at the district court level in comparison to the number of significant cases. So I would not view their relative prevalence at the Supreme Court level as strong information in either direction on how effective an amicus brief might be here.
Judges are busy people; if a would-be amicus seeks to file an unhelpful amicus brief at one stage of the litigation, it's pretty unlikely the judge is going to even touch another brief from that amicus at a later stage. If I were a would-be amicus, I would be inclined to wait until I thought I had something different enough than the parties to say -- or thought that I would be seen as a more credible messenger than the parties on a topic directly relevant to a pending decision -- before using my shot.
I agree this is absurd; this is probably the most obvious action Open Phil has not taken. What do they have to lose at this stage by filing a lawsuit, or at the very least, like you say, making an official comment?
Perhaps EAs and EA orgs are just by nature largely allergic to open public conflict even if it has decent potential to do good?
In my view, Musk would make a terrible relator here, and for reasons that have nothing to do with partisan affiliation. He has his own personal interests at stake -- such as his interest in xAI and in being part of a consortium allegedly seeking to purchase OpenAI assets -- and there's a serious conflict between those considerable personal interests and dispassionate advocacy of the public interest.