Studied computer science at UW-Madison. In terms of career paths, AI safety, computer security (possibly for GCR reduction), computational modeling for alternative proteins, general EA-related research, and earning to give are all currently on the table, and I'm trying to assess each. Beyond these areas, I have a wide range of interests within EA.
Anonymous Feedback Form: If you have any feedback for me on anything and feel inclined to fill out this form, I would very much appreciate it! (idea credit: Michael Aird)
Sorry for the incredibly late response! I think that all makes sense--thanks for sharing!
I think it also ends up depending a lot on one's particular circumstances: do you have a unique / timely opportunity to "jump in"? Do you have a clear path forward (i.e., options that you could even "jump into")? How uncertain do you feel about which path is right for you, and how much would you have to reduce your uncertainty to feel satisfied?
It's funny you mention that "it is easy to come to the conclusion that one should become a full-time philosopher or global priorities researcher to straighten these uncertainties out"--I've recently been thinking that this could actually be a good move in my particular circumstances. Global priorities research seems like a potentially very high-impact area in itself. On top of that, one could use that time to become more informed about other cause areas that might be even higher impact for oneself. However, I'm not sure how useful this "scouting" approach would be. For example, some of the areas I think could be higher impact for me than GPR are ones I'm already pretty well aware of. But I guess it could still be an opportunity to learn more about those areas, depending on what the GPR work would entail.
Hey all!
Here's a short page on vegan nutrition for anyone trying to learn more about it / get into veganism.
If someone doesn't have much prior ML experience, can they still be a TA, assuming they have a month to dedicate to learning the curriculum before the program starts?
If so, would the TA's learning during that month be self-guided, or would it take place in a structure/environment similar to the one the students will experience?
This sounds really exciting!
I'm a bit unclear on the below point:
"I think that MLAB is a good use of time for many people who don’t plan to do technical alignment research long term but who intend to do theoretical alignment research or work on other things where being knowledgeable about ML techniques is useful."
Do you mean you don't think MLAB would be a good use of time for people who do "plan to do technical alignment research long term"?
Thanks for this!
Does "Learning the Basics" specifically mean learning AI Safety basics, or does this also include foundational AI/ML (in general, not just safety) learning? I'm wondering because I'm curious if you mean that the things under "Learning the Basics" could be done with little/no background in ML.
When I first read this and some of the other comments, I think I was in an especially sensitive headspace for guilt / unhealthy self-pressure. Because of that and the way it affected me at the time, I want to mention for others in similar headspaces: Nate Soares' Replacing Guilt series might be helpful (there's also a podcast version). Also, if you feel like you need to talk to someone about this and/or would like ideas for additional resources (I'm not sure how many I have, but at least some), please feel free to direct message me.
Good point! I think I actually had that same misunderstanding too!