
Joel Becker

@ Various
London, UK
joel-becker.com/

Comments

Thank you for sharing these reflections, Asya! And for your service as the LTFF chair!

I feel confused about the difficulty of fund manager hiring. One source of confusion comes from the importance of expertise (i.e., of fund managers doing good direct work), as you touch on in the post:

> Historically, we've had trouble hiring fund managers, especially in technical AI alignment, largely for the reasons mentioned above (people generally want to focus on their work). I think there's an extent to which I've contributed to our difficulty in hiring, in that I'm not sold that people doing good direct work should be taking on additional responsibilities as fund managers (so haven't been great at convincing people to join)

In addition to the high opportunity cost of time for expert fund managers, I would have guessed that small differences between the EVs of marginal grants push in the direction of expertise being less important. But then I don't understand why hiring fund managers would be unusually challenging. Wouldn't deemphasizing expertise increase the pool of eligible fund managers, thereby making hiring easier?

(Perhaps I'm confusing relative and absolute difficulty — expertise being less important would make hiring relatively easier, but it's still absolutely tough?)

The second source of confusion comes from reconciling the difficulty of finding fund managers with the fact that FTXFF and Manifund seemed to find part-time grantmakers quite easily. I don't know how many regrantors and grant-recommenders FTXFF ended up with, but the last rumour I heard was between 100 and 200. Manifund are currently on 16 and seem keen to expand. I would've thought that there is some intersection between the regrantors with, say, the top 30% of grantmaking records by your lights, those satisfying other hiring criteria you might have, and those currently willing to work with LTFF.

Is the difference in the scale of grants LTFF fund managers make vs regrantors? Or expectations around regularity of response (regrantors are more flexible)? Or you’re not excited about the records of regrantors in general? Or something else?

I have made early steps towards this. So far funder interest has been a blocker, although perhaps that doesn’t say much about the value of the idea in general.

This is a phenomenal resource. Well done Aron!

Compared to whatever!

The basic case -- (1) existing investigation of what scientific theories of consciousness imply for AI sentience plausibly suggests that we should expect AI sentience to arrive (via human intention or accidental emergence) in the not-distant future, (2) this seems like a crazy big deal for ~reasons we can discuss~, and (3) almost no-one (inside EA or otherwise) is working on it -- rhymes quite nicely with the case for work on AI safety.

Feels to me like it would be easy to overemphasize tractability concerns about this case. Again by analogy to AIS:

  1. Seems hard; no-one has made much progress so far. (To first approximation, no-one has tried!)
  2. SOTA models aren't similar enough to the things we care about. (This might become less true over time; in any case, it seems like we could plausibly set ourselves up well even using only dissimilar models.)

But I'm guessing that gesturing at my intuitions here might not be convincing to you. Is there anything you disagree with in the above? If so, what? If not, what am I missing? (Is it just a quantitative disagreement about magnitude of importance or tractability?)

Not for the main role any more, but excited to hear about people who might be interested in contributing!

True, but an appropriate number given the topic’s importance and neglectedness?

Agree.

Really glad this work is being done; grateful to Nikos for it! The "yes, and" is that we're nowhere near the frontier of what's possible.

You did a great job, Rob (and Luisa)! :)

Thanks for running this, Nuno! I had fun participating!

I agree with

> My sense is that similar contests with similar marketing should expect a similar number of entries.

if we're really strict about "similar marketing." But, when considering future contests, there's no need to hold that constant. The fact that e.g. Misha Yagudin had not heard of this prize seems shocking and informative to me. I think you could invest more time into thinking about how to increase engagement!

Relatedly, I have now had the following experience a number of times. I don't know how to solve some problem in squiggle (charting multiple plots, feeding in large parameter dictionaries, taking many samples of samples, saving samples for use outside of squiggle, embedding squiggle in a web app, etc., etc.). I look around the squiggle documentation searching for a solution, and can't find it. I message one of the squiggle team. The squiggle team member has an easy and (often but not always) already-implemented-elsewhere solution that is not publicly available in any documentation or similar. I leave feeling very happy about the existence of squiggle and the helpfulness of its team! But another feeling I have is that the squiggle team could be more successful if it invested more time in the final, sometimes boring mile of examples/documentation/evangelism, rather than chasing the next more intellectually interesting project.

Nice post! I would've already said this in feedback but, to reiterate publicly: I thought that the first ever EAGx in Latin America went fantastically! :) Well done to you all!
