Co-president of Stanford EA and Stanford AI Alignment; formerly an organizer at the Stanford Alt Protein Project. B.S. Computer Science, 2018-2023. Most interested in AI safety and animal welfare.
AI alignment/safety community building: I'm starting a Stanford AI Alignment club out of Stanford Effective Altruism. How should we operate this club and its activities over the next year (to start) in order to do the most good?
General community building: How can I help improve Stanford Effective Altruism? What does CB look like outside university?
Empirical AI alignment work: What does it look like from a variety of perspectives (maybe not just Redwood and Anthropic)? Does my career plan for skilling-up look solid? Should I try to go to grad school?
Animal welfare: What are the latest promising strategies for farmed animal welfare? What do we do about wild animal welfare?
Can talk about organizing and participating in university groups, particularly Stanford Effective Altruism, Stanford AI Alignment, and the Stanford Alt Protein Project. Generally tied into the Bay Area EA and AI alignment communities. Have been upskilling in machine learning for empirical AI safety work and helping a few peers do the same.
Pretty ambitious, thanks for attempting to quantify this!
Having only quickly skimmed this and not looked into your code (so this could be my fault), I find myself a bit confused about the baselines: funding a single research scientist (I'm assuming this means at a lab?) or Ph.D. student for even 5 years doesn't seem clearly equivalent to 87 or 8 adjusted counterfactual years of research; I'd imagine it's much less than that. Could you provide some intuition for how the baseline figures are calculated (maybe you are assuming second-order effects, like funded individuals getting interested in safety and doing more of it or mentoring others under them)?
climate since this is the one major risk where we are doing a good job
Perhaps (at least in the United States) we haven't been doing a very good job on the communication front for climate change: there are many social circles where climate change denial has been normalized, and the issue has become very politically polarized, with many politicians turning climate change from an empirical scientific problem into a political "us vs. them" problem.
around the start of this year, the SERI SRF (not MATS) leadership was thinking seriously about launching a MATS-styled program for strategy/governance
I'm on the SERI (not MATS) organizing team. One person from SERI (henceforth meaning not MATS, as the two have rather split) was thinking about this in collaboration with some of the MATS leadership. The idea is currently not alive, but afaict it didn't strongly die (i.e., I don't think people decided not to do it and cancelled things; rather, it failed to happen due to other priorities).
I think something like this would be good to make happen, though, and if others want to help make it happen, let me know and I'll loop you in with the people who were discussing it.
Excited for this!
Nit: your logo seems to show the shrimp a bit curled up, which iirc is a sign that they're dead rather than happy, freely living shrimp (though it's good that they're blue and not red).
Some discussion of this consideration in this thread: https://forum.effectivealtruism.org/posts/bBoKBFnBsPvoiHuaT/announcing-the-ea-merch-store?commentId=jaqayJuBonJ5K7rjp
aren't more reliable than chance
Curious what you mean by this. One version of chance is a "uniform prediction of AGI over future years," which obviously seems worse than Metaculus, but perhaps you meant a more specific baseline?
Personally, I think forecasts like these are rough averages of what informed individuals would think about these questions. Yes, you shouldn't defer to them, but it's also useful to recognize how that community's predictions have changed over time.
Thanks for this post! I appreciate the transparency, and I'm sorry for all this suckiness.
Could one additional easyish structural change be making EAGx applications due even earlier? I feel like the EA community has a bad tendency of keeping applications open until very soon before the event itself, and an earlier due date would give people more time to figure out if they're going and create more buffer before catering-number deadlines. Ofc, this costs some extra organizer effort since you have to plan further ahead, but I expect that's more of a shift in timing than a whole lot of extra work.
That makes sense, thanks for the explanation! Yeah, I'm still a bit confused about why they chose different numbers of years for the research scientist and the PhD student, how those particular numbers arise, and why they're so different (I'm assuming it's 1 year of scientist funding vs. 5 years of PhD funding).