I am an undergraduate majoring in applied math, and I am trying to pivot towards alignment research. I am finishing up a course on the mathematics of machine learning offered by UIUC. To help others in a similar position, I've been working on an article about who might be well suited for the course, the concepts it covers, and ultimately how useful it is for becoming a better alignment researcher compared to other avenues. I am still writing the post, and I am not confident that even its core arguments hold up, so I would like to get feedback from others. If the article contains misconceptions, that feedback would help me correct them before I publish it for public viewing. (A potential counterpoint is that if many people find fault with the post, the voting mechanism would organically reduce its prominence.)
Send me a message or comment if you're interested. I'd appreciate anyone who's willing to provide feedback on this.
I'm curious why Zach thinks it would be ideal for leading AI labs to be in the US. I tried to consider this through the lens of regulation. I haven't read extensively about how AI regulation compares across countries, but my impression is that the US federal government has been slow to act on regulating AI (state and municipal governments paint a somewhat different picture), while the EU and the UK, whatever their differing intentions, have been moving much more swiftly than the US federal government.
My opinion would change if regulation doesn't play a large role in how successful an AI pause is, e.g. if industry players could voluntarily exercise restraint. There are also likely other factors that I'm not considering.