For Question 2, should each submission define what timeframe they're considering for "will suffer"?
Conditional on AGI being developed by 2070, what is the probability that humanity will suffer an existential catastrophe due to loss of control over an AGI system?
I understand two timeframes here - one explicit and one implicit. The explicit timeframe of "by 2070" makes sense to me.
The implicit timeframe of "will suffer" is ambiguous to me, so I think each submission should define it. Open Philanthropy seems to emphasize this century's importance, so I plan to limit my estimate and reasoning to "catastrophe by the end of this century." For this contest, it seems unlikely the judges want to understand the tail of the yearly distribution (i.e. AGI deployed by 2070 but going rogue in 2500 for some esoteric reason).