3934 karma · Joined Dec 2017 · Working (0-5 years) · Arlington, VA 22202, USA


I am a lawyer and policy researcher interested in improving the governance of artificial intelligence. I currently work as Director of Research at the Institute for Law & AI. I previously worked in various legal and policy roles at OpenAI.

I am also a Research Affiliate with the Centre for the Governance of AI and a VP at the O’Keefe Family Foundation.

My research focuses on the law, policy, and governance of advanced artificial intelligence.

You can share anonymous feedback with me here.


Law-Following AI
AI Benefits


Topic contributions


I am not under any non-disparagement obligations to OpenAI.

It is important to me that people know this, so that they can trust any future policy analysis or opinions I offer.

I have no further comments at this time.

I'm sorry for not getting around to responding to this, and may not be able to for some time. But I wanted to quickly let you know that I appreciated both this comment and your post, and both updated me significantly toward your position and away from my Reason 4.

Do you have specific examples of proposals you think have been too far outside the window?

I realize that the idea of cloud labs is not new. I just think that this particular quote is so obviously scary that it could be rhetorically useful.

Quote from VC Josh Wolfe:

Biology. We will see an AWS moment where instead of you having to be a biotech firm that opens your own wet lab or moves into Alexandria Real Estate, which, you know, specializes in hosting biotech companies in all these different regions proximate to academic research centers, you will be able to just take your experiment and upload it to the cloud, where there are cloud-based robotic labs. We funded some of these. There's one company called Stratios.

There's a ton that are gonna come in a wave, and this is exciting because you can be a scientist on the beach in the Bahamas, pull up your iPad, and run an experiment. The robots are performing 90% of the activity of pouring something from one beaker into another, running a centrifuge, and then the data that comes off of that.

And this is the really cool part. Then the robot and the machines will actually say to you, “Hey, do you want to run this experiment but change these four parameters or these variables?” And you just click a button “yes,” as though it's reverse-prompting you, and then you run another experiment. So the implication here is the boost in productivity for science, for the generation of truth, of new information, of new knowledge. That to me is the most exciting thing. And the companies that capture that, forget about the societal dividend, I think are gonna make a lot of money.



OP gave some reasoning for their views in their recent blog post:

Another place where I have changed my mind over time is the grant we gave for the purchase of Wytham Abbey, an event space in Oxford.

We initially agreed to help fund that purchase as part of our effort to support the growth of the community working to reduce global catastrophic risks (GCRs). The original idea presented to us was that the space could serve as a hub for workshops, retreats, and conferences, to cut down on the financial and logistical costs of hosting large events at private facilities. This was pitched to us at a time when FTX was making huge commitments to the GCR community, which made resources appear more abundant and lowered our own bar. Since its purchase, the space has gotten meaningful use for community events and gatherings. But with the collapse of FTX, our bar for this kind of work rose, and the original grant would no longer have risen to the level where we would want to provide funding.

Because this was a large asset, we agreed with Effective Ventures ahead of time that we would ask them to sell the Abbey if the event space, all things considered, turned out not to be sufficiently cost-effective. We recently made that request; funds from the sale will be distributed to other valuable projects they run.

While this grant retroactively came in below our new bar, I don’t think that alone is a big problem. If you didn’t make some grants that look less attractive when the expected funding drops by half, you weren’t spending aggressively enough before.

But I still think I personally made a mistake in not objecting to this grant back when the initial decision was made and I was co-CEO. My assessment then was that this wasn’t a major risk to Open Philanthropy institutionally, so it wasn’t my place to try to stop it. I missed how something that could be parodied as an “effective altruist castle” would become a symbol of EA hypocrisy and self-servingness, causing reputational harm to many people and organizations who had nothing to do with the decision or the building.

This is a tough balance to strike because I think it’s easy for organizations to be paralyzed by concerns over reputational risk, rendering them unable to make nearly any decisions. And I think a core part of our hits-based giving philosophy is being able to make major bets that can fail outright, even in embarrassing ways. I want to maintain that openness to risk when the upside justifies it. But this example has made me want to raise our bar for things that could end up looking profligate or irresponsible to the detriment of broader communities we’re associated with.

How does AMF collect feedback from the end-recipients of bednets? How does feedback from them inform AMF's programming?

Do you have any citations for this claim?

According to the book Bullies and Saints: An Honest Look at the Good and Evil of Christian History, some early Christians sold themselves into slavery so they could donate the proceeds to the poor. Super interesting example of extreme and early ETG.

(I'm listening on audiobook so I don't have the precise page for this claim.)

(To avoid bad-faith misinterpretation: I obviously think that nobody should do the same.)
