
toonalfrink

Excellent idea. This would also incentivize writing an application that is generally convincing, instead of trying to hack the preferences of one specific fund.

Yes, I imagine funding diversification would help, though I'm not sure if it would go far enough to make EA a good career bet.

My own solution is to work myself up to the point where I'm financially independent from EA, so my agency is not compromised by someone else's model of what works.

And you're right that better epistemics might help address the other two problems, but only insofar as the interventions are targeted at "s1 epistemics", i.e. the stuff that doesn't necessarily follow from conscious deliberation. Most of the techniques in this category would fall under the banner of spirituality (the pragmatic type, without metaphysics). This is something the rationalist project has not addressed sufficiently. I think there's a lot of unexplored potential there.

Re "epistemics and integrity" - I'm glad to see this problem being described. It's also why I left (angrily!) a few years ago, but I don't think you're really getting to the core of the issue. Let me try to point at a few things

  • centralized control and disbursement of funds, with a lot of discretionary power and a very high and unpredictable bar, gives me no incentive to pursue what I think is best, and every incentive to just stick to the popular narrative. Indeed, groupthink. Except training people not to groupthink isn't going to change their (existential!) incentive to groupthink. People's careers are on the line, there are only a few opportunities for funding, no guarantee of keeping it after the first round, and no clear way to pivot into a safer option except to start a new career somewhere your heart does not want to be, having thrown years away.

  • lack of respect for "normies". Many EAs seemingly can't stand interacting with non-EAs. I've seen EA meditation, EA bouldering, EA clubbing, EA whatever. Orgs seem to want everyone and the janitor to be "aligned". Everyone's dating each other. It seems that we're even afraid of them. I will never forget that just a week before I arrived at an org I was to be the manager of, they turned away an Economist reporter at their door...

  • perhaps in part due to the above, massive hubris. I don't think we realise how much we don't know. We started off with a few slam dunks (yeah wow, 100x more impact than average) and now we seem to think we are better at everything. Clearly the ability to discern good charities does not transfer to the ability to do good management. The truth is: we are attempting something that we don't even know is possible at all. Of course we're all terrified! But where is the humility that should go along with that?

I did not know this would be public

I just didn't want to waste this money on shrimps

This topic seems even more relevant today than in 2019, when I wrote it. At EAG London I saw an explosion of initiatives, and there is even more money that isn't being spent. I've also seen an increase in the attention EA is giving to this problem, both from the leadership and on the forum.

Increase fidelity for better delegation

In 2021 I still like to frame this as a principal-agent problem.

First of all there's the risk of goodharting. One prominent grantmaker recounted to me that back when one prominent org was giving out grants, people would just frame what they were doing as EA, and then they would keep doing what they were doing anyway.

This is not actually an unsolved problem if you look elsewhere in the world. Just look at your average company. Sure, employees like to sugarcoat their work a bit, but we don't often see a total departure from what their boss wants from them. Why not?

Well, I recently applied for funding from the EA Meta Fund. The project was a bit wacky, so we gave it a 20% chance of being approved. The rejection e-mail contained a whopping ~0.3 bits of information: "No". It's like that popular meme where a guy asks his girlfriend what she wants to eat, makes a lot of guesses, and she just keeps saying "no" without giving him any hints.
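
For anyone wondering where that figure comes from: assuming it refers to the surprisal of the outcome under our own estimate, a rejection was 80% expected (we gave approval a 20% chance), so observing it conveys

$$-\log_2 0.8 \approx 0.32 \text{ bits.}$$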

So how are we going to find out what grantmakers want from us, if not by the official route? Perhaps this is why it seems so common for people close to the grantmaker to get funded: they do get to have high-fidelity communication.

If this reads as cynicism, I'm sorry. For all I know, they've got perfect reasons for keeping me guessing. Perhaps they want me to generate a good model by myself, as a proof of competence? There's always a high-trust interpretation and despite everything I insist on mistake theory.

The subscription model

My current boss talks to me for about an hour, about once a month. This is where I tell him how my work is going. If I'm off the rails somehow, this is where he would tell me. If my work was to become a bad investment for him, this is where he would fire me. 

I had a similar experience back when I was doing RAISE. Near the end, there was one person from Berkeley who was funding us. About once a month, for about an hour, we would talk about whether it was a good idea to continue this funding. When he updated away from my project being a good investment, he discontinued it. This finally gave me the high-fidelity information I needed to decide to quit. If not for him, who knows how much longer I would have continued.

So if I were to attempt a practical solution: train more grantmakers. Allow grantmakers to make exploratory grants unilaterally to speed things up. Fund applicants according to a subscription model. Be especially liberal with the first grant, but only fund them for a short period. Talk to them after every period. Discontinue funding as soon as you stop believing in their project. Give them a cooldown period between projects so they don't leech off of you.

I have added a note to my RAISE post-mortem, which I'm cross-posting here:

Edit November 2021: there is now the Cambridge AGI Safety Fundamentals course, which promises to be successful. It is enlightening to compare this project with RAISE. Why is that one succeeding where this one did not? I'm quite surprised to find that the answer isn't so much about more funding, more senior people to execute it, more time, etc. They're simply using existing materials instead of creating their own. This makes it orders of magnitude easier to produce the thing; you can just focus on the delivery. Why didn't I, or anyone around me, think of this? I'm honestly perplexed. It's worth thinking about.

You might feel that this whole section is overly deferential. The OpenPhil staff are not omniscient. They have limited research capacity. As Joy's Law states, "no matter who you are, most of the smartest people work for someone else."

But unlike in competitive business, I expect those very smart people to inform OpenPhil of their insights. If I did personally have an insight into a new giving opportunity, I would not proceed to donate; I would proceed to write up my thoughts on the EA Forum and get feedback. Since there's an existing popular venue for crowdsourcing ideas, I'm even less willing to believe that large EA foundations have simply missed a good opportunity.

I would like to respond specifically to this reasoning.

Consider a scenario in which a random (i.e. probably not EA-affiliated) genius comes up with an idea that is, as a matter of fact, high value.

Simplifying a lot, there are two possibilities here: (X) their idea falls within the window of what the EA community regards as effective, or (Y) it does not.

Probabilities for X and Y could be hotly debated, but I'm comfortable stating that the probability of X is less than 0.5; i.e., we may have a high success rate within our scope of expertise, but the share of good ideas that EA can recognize as good is not that high.

The ideas that reach OpenPhil via the EA community might be good, but not all good ideas make it through the EA community.
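
As a minimal sketch of the same argument in symbols (writing G for "the idea is actually good", using X as above, and assuming an idea can only reach OpenPhil via the community if the community recognizes it):

$$P(\text{idea reaches OpenPhil} \mid G) \le P(X) < 0.5$$

That is, even granting that everything the community does recognize gets passed along, more than half of the good outside ideas never make it to the foundation.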

To me, reducing your weirdness is equivalent to defection in a prisoner's dilemma, where the least weird person gets the most reward but the total reward shrinks as the total weirdness shrinks.
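
A toy payoff matrix makes that structure explicit (the numbers are purely illustrative, not from anything above): each player picks "weird" or "normal"; playing normal always pays more for the individual, but the total payoff is highest when both are weird.

$$\begin{array}{c|cc} & \text{they: weird} & \text{they: normal} \\ \hline \text{you: weird} & (3,\,3) & (0,\,5) \\ \text{you: normal} & (5,\,0) & (1,\,1) \end{array}$$

Being normal dominates for each player, yet (normal, normal) has the smallest total: the standard prisoner's dilemma shape.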

Of course you can't just go all-out on weirdness, because the cost you'd incur would be too great. My recommendation is to be slightly more weird than average. Or: be as weird as you perceive you can afford, but not weirder. If everyone did that, we would gradually expand the range of acceptable things outward.
