Thanks, this is a great comment.
The first and second examples seem pretty good, and are useful reference points.
The third example doesn't seem nearly as useful though. What's particularly unusual about this case is that there are two useful inputs to AI R&D -- cognitive labour and compute for experiments -- and the former will rise very rapidly but the latter will not. In particular, I imagine CS departments also saw compute inputs growing in that time. And I imagine some of the developments discussed (eg proofs about algorithms) only have cognitive labour as an input.
For the second example (quant finance), I suppose the 'data' input to doing this work stayed constant while the cognitive effort rose, so it works as an example. Though it may be a field with an unusual superabundance of data, unlike ML.
The first example involves a kind of 'data overhang' that the cognitive labour quickly eats up. Perhaps in a similar way AGI will "eat up" all the insights that are implicit in existing data from ML experiments.
What I think all the examples currently lack is a measure of how the pace of overall progress changed that isn't completely made up. It could be interesting to list out the achievements in each time period and ask some experts what they think. There's an interesting empirical project here, I think.
All the examples also lack anything like the scale at which cognitive labour will increase with AGI. This makes comparison even harder. (Though if we can get 3X speed-ups from mild influxes of cognitive labour, that makes 10X speed-ups more plausible.)
I tried to edit the paragraph (though LW won't let me) to:
I think we don't know which perspective is right; we haven't had many examples where a huge amount of cognitive labour has been dumped on a scientific field, other inputs to progress have remained constant, and we've accurately measured how much overall progress in that field accelerated. (Edit: though this comment suggests some interesting examples.)
- I think utilitarianism is often a natural generalization of "I care about the experience of XYZ, it seems arbitrary/dumb/bad to draw the boundary narrowly, so I should extend this further" (This is how I get to utilitarianism.) I think the AI optimization looks considerably worse than this by default.
Why is this different between AIs and humans? Do you expect AIs to care less about experience than humans, maybe because humans get reward during lifetime learning but AIs don't get reward during in-context learning?
- I can directly observe AIs and make predictions of future training methods and their values seem to result from a much more heavily optimized and precise thing with less "slack" in some sense. (Perhaps this is related to genetic bottleneck, I'm unsure.)
Can you say more about how slack (or genetic bottleneck) would affect whether AIs have values that are good by human lights?
- Current AIs seem to use the vast, vast majority of their reasoning power for purposes which aren't directly related to their final applications. I predict this will also apply for internal high level reasoning of AIs. This doesn't seem true for humans.
In what sense do AIs use their reasoning power in this way? How does that affect whether they will have values that humans like?
I agree that bottlenecks like the ones you mention will slow things down. I think that's compatible with this being a "jump us forward a century" thing though.
Let's consider the case of a cure for cancer. First of all, even if it takes "years to get it out due to the need for human trials and to actually build and distribute the thing", AGI could still bring the cure forward from 2200 to 2040 (assuming we get AGI in 2035).
Second, the excess top-quality labour from AGI could help us route around the bottlenecks you mentioned:
It seems to me like you disagree with Carl because you write:
- The reason for an investor to make a bet is that they believe they will profit later
- However, if they believe in near-term TAI, savvy investors won't value future profits (since they'll be dead or super rich anyways)
- Therefore, there is no way for them to win by betting on near-term TAI
So you're saying that investors can't win from betting on near-term TAI. But Carl thinks they can win.
Thanks for these great questions, Ben!
To take them point by point:
I like the vividness of the comparisons!
A few points against this being nearly as crazy as the comparisons suggest: