Agree. I don't know if you meant this too, but I also think that focusing on one particular person who manages to gain a lot of influence among the members of their local EA group/organisation, or more generally building a kind of cult of personality around a few leading figures of the movement, can be dangerous in the long run. SBF is, in a way, an example of the unilateralist's curse.
According to CLR, since resource acquisition is an instrumental goal regardless of an AGI's utility function, it could lead to a race in which each AGI can threaten the others, such that the target has an incentive to hand over resources or comply with the threatener's demands. Is such a conflict scenario (potentially leading to x-risks) between two AGIs possible if the two AGIs have different intelligence levels? If so, is there an intelligence gap beyond which x-risks become unlikely? How would one characterize that function (the probability of the threat being executed as a function of the intelligence gap between the two AGIs)? In other words, the question is roughly: how does the distribution of agent intelligence affect the threat dynamics? Has any work been done on this?
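To make the question concrete, here is a minimal toy sketch of the kind of function I have in mind. This is not from CLR's work: the logistic shape, the payoff numbers, and the names `execution_probability` and `target_complies` are all my own assumptions, purely for illustration.

```python
# Hypothetical toy model: a threatener T and a target V with an intelligence
# gap (T minus V). Assume the probability that T could actually carry out its
# threat rises with the gap -- modelled here as a logistic curve (an assumption).
import math

def execution_probability(gap, steepness=1.0):
    """Assumed P(threat can be executed) as a logistic function of the gap."""
    return 1.0 / (1.0 + math.exp(-steepness * gap))

def target_complies(gap, resources=10.0, harm_if_executed=25.0):
    """Target hands over resources iff expected harm from resisting
    exceeds the cost of complying (illustrative payoff numbers)."""
    return execution_probability(gap) * harm_if_executed > resources

for gap in [-4, -2, 0, 2, 4]:
    print(gap, round(execution_probability(gap), 2), target_complies(gap))
```

Under these (made-up) parameters, threats stop being credible once the gap is sufficiently negative, which is one way the "is there a gap at which x-risks become unlikely?" question could be made precise.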
I have the feeling that there is a tendency in the AI safety community to think that if we solve the alignment problem, we're done and the future will necessarily be flourishing (I notice that some EAs say that either we go extinct or it's heaven on earth depending on the alignment problem, in a very binary way). However, it seems to me that post-aligned-AGI scenarios merit attention as well: game theory gives us sufficient reason to say that even rational agents (in this case two or more AGIs) can make sub-optimal decisions, including ones leading to catastrophic outcomes, when faced with certain social dilemmas. Any thoughts on this, please?
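A quick worked illustration of that game-theoretic point, using a standard two-player Prisoner's Dilemma (textbook payoffs, nothing specific to AGIs; the same logic extends to more agents and richer post-alignment bargaining settings):

```python
# In a one-shot Prisoner's Dilemma, mutual defection is the unique Nash
# equilibrium even though both players strictly prefer mutual cooperation.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
ACTIONS = ["C", "D"]

def best_response(opponent_action, player):
    """Action maximising this player's payoff given the opponent's action."""
    def payoff(a):
        profile = (a, opponent_action) if player == 0 else (opponent_action, a)
        return PAYOFFS[profile][player]
    return max(ACTIONS, key=payoff)

# A profile is a Nash equilibrium iff each action is a best response to the other.
nash = [
    (a0, a1)
    for a0 in ACTIONS
    for a1 in ACTIONS
    if a0 == best_response(a1, 0) and a1 == best_response(a0, 1)
]
print("Nash equilibria:", nash)                       # [('D', 'D')]
print("Pareto-better outcome:", PAYOFFS[("C", "C")])  # (3, 3) vs (1, 1)
```

So even two perfectly rational, perfectly aligned-with-their-own-principals agents can end up in an outcome both of them disprefer, which is the worry I have about purely binary "aligned vs. extinct" framings.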
Great post, thanks!
I'm not sure to what extent you intended to differentiate AI governance from AI policy when writing this post. It seems to me that the AI safety community tends to underestimate the importance of directly engaging with more official institutions (e.g. the OECD, national governments) to do policy work. These institutions may have small teams working on AI policy, but their capacity for action is considerable given how new the field of GPAI governance is. This contrasts with conducting research within the organizations mentioned in the post (in other words, 100% "EA-aligned" organisations). It appears to me that doing "AI policy implementation" may ultimately have a larger direct impact, particularly under short timelines, than an AI governance research role.