The basic case -- (1) existing investigation of what scientific theories of consciousness imply for AI sentience plausibly suggests that we should expect AI sentience to arrive (via human intention or accidental emergence) in the not-distant future, (2) this seems like a crazy big deal for ~reasons we can discuss~, and (3) almost no one (inside EA or otherwise) is working on it -- rhymes quite nicely with the case for work on AI safety.
Feels to me like it would be easy to overemphasize tractability concerns about this case. Again by analogy to AIS:
But I'm guessing that gesturing at my intuitions here might not be convincing to you. Is there anything you disagree with in the above? If so, what? If not, what am I missing? (Is it just a quantitative disagreement about magnitude of importance or tractability?)
Thanks for running this, Nuno! I had fun participating!
I agree with "My sense is that similar contests with similar marketing should expect a similar number of entries" if we're really strict about "similar marketing." But, when considering future contests, there's no need to hold that constant. The fact that e.g. Misha Yagudin had not heard of this prize seems shocking and informative to me. I think you could invest more time into thinking about how to increase engagement!
Relatedly, I have now had the following experience a number of times. I don't know how to solve some problem in Squiggle (charting multiple plots, feeding in large parameter dictionaries, taking many samples of samples, saving samples for use outside of Squiggle, embedding Squiggle in a web app, etc., etc.). I look around the Squiggle documentation searching for a solution, and can't find it. I message one of the Squiggle team. The Squiggle team member has an easy and (often but not always) already-implemented-elsewhere solution that is not publicly available in any documentation or similar. I leave feeling very happy about the existence of Squiggle and the helpfulness of its team! But another feeling I have is that the Squiggle team could be more successful if it invested more time in the final, sometimes boring mile of examples/documentation/evangelism, rather than chasing the next more intellectually interesting project.
Thank you for sharing these reflections, Asya! And for your service as the LTFF chair!
I feel confused about the difficulty of fund manager hiring. One source of confusion comes from the importance of expertise (/doing-good-direct-work), as you touch on in the post:
In addition to the high opportunity cost of time for expert fund managers, I would have guessed that small differences between the EVs of marginal grants push in the direction of expertise being less important. But then I don't understand why hiring fund managers would be unusually challenging. Wouldn't deemphasizing expertise increase the pool of eligible fund managers, thereby making hiring easier?
(Perhaps I'm confusing relative and absolute difficulty: expertise being less important would make hiring relatively easier, but it's still absolutely tough?)
The second source of confusion comes from reconciling the difficulty of finding fund managers with the fact that FTXFF and Manifund seemed to find part-time grantmakers quite easily. I don't know how many regrantors and grant-recommenders FTXFF ended up with, but the last rumour I heard was between 100 and 200. Manifund are currently on 16 and seem keen to expand. I would've thought that there is some intersection among regrantors with, say, the top 30% of grantmaking records by your lights, those satisfying other hiring criteria you might have, and those currently willing to work with LTFF.
Is the difference in the scale of grants LTFF fund managers make vs regrantors? Or expectations around regularity of response (regrantors are more flexible)? Or is it that you're not excited about the records of regrantors in general? Or something else?