Ian Turner

640 karma · Joined

Comments (161)

I think there are a few different sub-questions here, with different answers.

If your question is, "Why doesn't the Gates Foundation fill the entire funding gap for GiveWell's top charities", I think there are a number of reasons why a value-aligned, careful, thoughtful, rational, and well-funded donor might nonetheless choose not to fill the entire gap. Open Philanthropy has written quite a lot about their thoughts here, and most of that would apply to any other funder. @DavidNash's answer seems directed at this sort of question.

If your question is, "Why does Larry Ellison give to the IDF instead of AMF", or "Why does Carl Icahn give to NYC stuff instead of the GFI"... I think the answer is that billionaires are just regular people, like you and me, in a lot of ways, and in particular that the things that make one a billionaire philanthropist do not really guarantee that one will be an effective philanthropist. Billionaires have the resources to get all the advice they want, but most philanthropy advice is still basically "follow your passion". Most donors don't even say they care about effectiveness, and I doubt the picture is much different with billionaires.

I downvoted this, not because I disagree, but because I ... just think it isn't very good. All the same critiques have been raised elsewhere in a much more thoughtful, well-argued, and cogent way, and often there have been good debates in those other places. Not to pile on to the author, who I'm sure tried his or her best, but the present article seems quite sophomoric to me, as if written by someone who thought of these critiques but didn't spend the time to see whether they were actually novel, let alone engage with the criticisms that are already out there.

For example:

Thanks for writing this. I imagine it was painful, and I’m grateful.

Would you mind saying a little more about this bit?

had I pushed to ask more stupid questions about cash-benchmarking early, I would have arrived at the conclusion that it has enormous implementation difficulties much faster.

Given how many of the frontier AI labs have an EA-related origin story, I think it's totally plausible that the EA AI xrisk project has been net negative.

Open Philanthropy has significantly cut back its allocation to GiveWell. “In our GHW portfolio, we decided — and announced last year — that we would scale back our donations to GiveWell’s recommendations to $100M/year, the level they were at in 2020.”

I would also not read too much into GiveWell’s decision to hold onto funds for a year. They do that sometimes when they have an opportunity which they expect will be good, but which hasn’t yet been fully vetted; or when there is an opportunity that isn’t quite ripe yet for some reason. This has as much to do with expectations about next year’s fundraising as it does with today’s opportunities.

Something else you might consider is, if you didn’t give to GiveWell, where would you give? And would that other opportunity be better or worse, in expectation?

If the problem is an employee rebellion, wouldn’t the obvious alternative be to organize the company in a jurisdiction that allows noncompete agreements?

These things are not generally enforced in court. It’s the threat that has the effect, which means a non-disparagement agreement works even if it’s of questionable enforceability, and even if it is in fact never enforced.

@Zvi  has a blog post about all the safety folks leaving OpenAI. It’s not a great picture. 

If Tina were to advertise that 100% of the profits generated by her store were going to a specific charity, in the current economic arrangement, this would not be a real Profit for Good business.

How much does the ability of companies to muddy the waters affect your analysis? It seems to me that even today, regular for-profit companies find ways to imply that they are socially beneficial, even when the opposite is true.

Oh sure, I'll readily agree that most startups don't have a safety culture. The part I was disagreeing with was this:

I think it’s hard to have a safety-focused culture just by “wanting it” hard enough in the abstract

Regarding finance, I don't think this is about 2008, because there are plenty of trading firms that were careful from the outset that were also founded well before the financial crisis. I do think there is a strong selection effect happening, where we don't really observe the firms that weren't careful (because they blew up eventually, even if they were lucky in the beginning).

How do careful startups happen? Basically, I think it just takes safety-minded founders. That's why the quote above didn't seem quite right to me. Why are most startups not safety-minded? Because most founders are not safety-minded, which in turn is probably due to a combination of incentives and selection effects.
