(Post 6/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)
Explicit backchaining is one way to do prioritisation. I sometimes forget that there are other useful heuristics, like:
(Post 5/N.)
Note to self: more detailed but less structured version of these notes here.
(Post 4/N.)
I’ve spent a bit of time over the last year trying to form better judgement. Dumping some notes here on things I tried or considered trying, for future reference.
I think this framing of the exercise might have been mentioned to me by Michael Aird.
(Post 3/N.)
(Post 2/N.)
(Post 1/N.)
In my view, these are some of the key uncertainties in AI governance field-building: questions which, if we had better answers to them, might significantly influence decisions about how field-building should be done.
Reasons why this seems important to get clarity on:
Re: “positions with a deadline”: it seems plausible to me that there will be windows of opportunity when important positions open up, and if you haven’t built the traits you need by that point, it’s too late. E.g. having more people highly skilled at public comms would probably have been pretty useful in Q1-Q2 2023.
Counterpoint: the strongest version of this consideration assumes a kind of “efficient market hypothesis” for people building up their own skills. If people aren’t doing this efficiently, then there could still be significant gains from helping them do so, even for positions that are currently being hired for. Still, I think this consideration carries some weight.