
anakaryan

36 karma · Joined

Comments (5)

Intuitively I feel that this process does generalise, and I would personally be really keen to read case studies of an idea/strategy that moved from left to right in the diagram above, i.e. a thinker initially identifies a problem, and over the following years or decades it moves to tactics research, then policy development, then advocacy, and is finally implemented. I doubt any idea in AI governance has gone through the full strategy-to-implementation lifecycle, but maybe one in climate change, nuclear risk management, or something else has? I would appreciate it if anyone could link case studies of this sort!

I’m trying to get a better picture in my head of what good work looks like in this space - i.e. existing work that has given us improved strategic clarity. This could be with regard to TAI itself, or to a technology such as nuclear weapons.

Imo, some examples of valuable strategic work/insights are:

I’m curious about any other examples of existing work that you think fit into the category of valuable strategic work of the type you talk about in this post.

This is a really comprehensive post on pursuing a career in technical AI safety, including how to test your fit and skill up.

Hi Will, 

From listening to your podcast with Ali Abdaal, it seems that you're relatively optimistic about humanity being able to create aligned AI systems. Could you explain the main reasons behind your thinking here?

Thanks!