Tom_Davidson

Comments (48)

I like the vividness of the comparisons!

A few points against this being nearly as crazy as the comparisons suggest:

  • GPT-2030 may learn much less sample-efficiently, and much less compute-efficiently, than humans. In fact, this is pretty likely. Ball-parking, humans do ~1e24 FLOP before they're 30, which is ~20X less than GPT-4's training compute (see the rough arithmetic sketch after this list). And we learn languages/maths from way fewer data points. So the actual rate at which GPT-2030 itself gets smarter will be lower than the rates implied. 
    • This is "learn" in the sense of "improves its own understanding". There's another sense, "produces knowledge for the rest of the world to use, e.g. science papers", where I think your comparisons are right. 
  • Learning may be bottlenecked by serial thinking time past a certain point, after which adding more parallel copies won't help. This could make the conclusion much less extreme.
  • Learning may also be bottlenecked by experiments in the real world, which may not immediately get much faster.
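
For concreteness, here's a rough back-of-the-envelope version of that ball-park in Python. The ~1e15 FLOP/s figure for the human brain and the ~2e25 FLOP figure for GPT-4's training compute are commonly cited outside estimates rather than numbers from the comparisons above, so treat this as an illustrative sketch only.

```python
# Rough back-of-the-envelope check (illustrative assumptions, not settled figures):
# - brain compute ~1e15 FLOP/s is a commonly used ball-park estimate
# - GPT-4 training compute ~2e25 FLOP is a commonly cited public estimate
BRAIN_FLOP_PER_SEC = 1e15
GPT4_TRAINING_FLOP = 2e25

SECONDS_PER_YEAR = 365.25 * 24 * 3600
human_flop_by_30 = BRAIN_FLOP_PER_SEC * 30 * SECONDS_PER_YEAR

print(f"Human 'compute' by age 30: {human_flop_by_30:.1e} FLOP")  # ~9.5e23, i.e. ~1e24
print(f"GPT-4 / human ratio: {GPT4_TRAINING_FLOP / human_flop_by_30:.0f}X")  # ~21X, i.e. roughly 20X
```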

Thanks, this is a great comment.

The first and second examples seem pretty good, and are useful reference points.

The third example doesn't seem nearly as useful though. What's particularly unusual about this case is that there are two useful inputs to AI R&D -- cognitive labour and compute for experiments -- and the former will rise very rapidly but the latter will not. In particular, I imagine CS departments also saw compute inputs growing in that time. And I imagine some of the developments discussed (e.g. proofs about algorithms) only have cognitive labour as an input.

For the second example (quant finance), I suppose the 'data' input to doing this work stayed constant while the cognitive effort rose. So it works as an example. Though it may be a field with an unusual superabundance of data, unlike ML. 

The first example involves a kind of 'data overhang' that the cognitive labour quickly eats up. Perhaps in a similar way AGI will "eat up" all the insights that are implicit in existing data from ML experiments.

What I think all the examples currently lack is a measure of how the pace of overall progress changed that isn't completely made up. It could be interesting to list out the achievements in each time period and ask some experts what they think. There's an interesting empirical project here, I think. 

All the examples also lack anything like the scale to which cognitive labour will increase with AGI. This makes comparison even harder. (Though if we can get 3X speed-ups from mild influxes of cognitive labour, that makes 10X speed-ups more plausible.)

I tried to edit the paragraph (though LW won't let me) to: 

I think we don't know which perspective is right: we haven't had many examples where a huge amount of cognitive labour has been dumped on a scientific field, other inputs to progress have remained constant, and we've accurately measured how much overall progress in that field accelerated. (Edit: though this comment suggests some interesting examples.) 

  •  I think utilitarianism is often a natural generalization of "I care about the experience of XYZ, it seems arbitrary/dumb/bad to draw the boundary narrowly, so I should extend this further" (This is how I get to utilitarianism.) I think the AI optimization looks considerably worse than this by default.

Why is this different between AIs and humans? Do you expect AIs to care less about experience than humans, maybe because humans get reward during lifetime learning but AIs don't get reward during in-context learning?

  • I can directly observe AIs and make predictions of future training methods and their values seem to result from a much more heavily optimized and precise thing with less "slack" in some sense. (Perhaps this is related to genetic bottleneck, I'm unsure.)

Can you say more about how slack (or genetic bottleneck) would affect whether AIs have values that are good by human lights?

  • AIs will be primarily trained in things which look extremely different from "cooperatively achieving high genetic fitness".

They might well be trained to cooperate with other copies on tasks, if that's the way they'll be deployed in practice?

  • Current AIs seem to use the vast, vast majority of their reasoning power for purposes which aren't directly related to their final applications. I predict this will also apply for internal high level reasoning of AIs. This doesn't seem true for humans.

In what sense do AIs use their reasoning power in this way? How does that affect whether they will have values that humans like?

I agree that bottlenecks like the ones you mention will slow things down. I think that's compatible with this being a "jump forward a century" thing though.

Let's consider the case of a cure for cancer. First of all, even if it takes "years to get it out due to the need for human trials and to actually build and distribute the thing", AGI could still bring the cure forward from 2200 to 2040 (assuming we get AGI in 2035).

Second, the excess top-quality labour from AGI could help us route around the bottlenecks you mentioned:

  • Human trials: AGI might develop ultra-high-reliability ways to verify that drugs work without human trials. That could either lead to a change in regulatory requirements or to people buying the AGI-designed drugs sooner in countries where that's legal.
  • Manufacturing and distributing the drug: Imagine if we'd had 100 million of the most competent humans working (remotely) full time on optimising every step of the manufacturing+distribution process for COVID. They could have:
    • Planned out how to use all the US' available manufacturing and transportation infrastructure maximally efficiently
    • Given real-time instructions to all the humans working in those industries so that they were more productive and better coordinated.
    • Recruited and trained new human workers (again instructing them in real-time) to increase the available labour.
    • More speculatively, it might not take long for AGI to design robots that could do the physical labour needed to manufacture and distribute the vaccines. 

It seems to me like you disagree with Carl because you write:

  • The reason for an investor to make a bet, is that they believe they will profit later
  • However, if they believe in near-term TAI, savvy investors won't value future profits (since they'll be dead or super rich anyways)
  • Therefore, there is no way for them to win by betting on near-term TAI

So you're saying that investors can't win from betting on near-term TAI. But Carl thinks they can win.

> Local cheap production makes for small supply chains that can regrow from disruption as industry becomes more like information goods.

Could you say more about what you mean by this?

Thanks for these great questions, Ben!

To take them point by point:

  1. The CES task-based model incorporates Baumol effects, in that after AI automates a task the output on that task increases significantly and so its importance to production decreases. The tasks with low output become the bottlenecks to progress. 
    1. I'm not sure exactly what you mean by technological deflation. But if AI automates therapy and increases the number of therapists by 100X, then my model won't imply that the real $ value of the therapy industry increases 100X. The price of therapy falls, and so there is a more modest increase in the value of therapy.
    2. Re technological unemployment, the model unrealistically assumes that when AI automates (e.g.) 20% of tasks, human workers are immediately reallocated to the remaining 80%. I.e. there is no unemployment until AI automates 100% of tasks. I think this makes sense for things like Copilot that automate/accelerate one part of a job, but is wrong for a hypothetical AI that fully automates a particular job. Modelling delays to reallocating human labour after AI automation would make takeoff slower. My guess is that this will be a bigger deal for the general economy than for AI R&D. E.g. maybe AI fully automates the trucking industry, but I don't expect it to fully automate a particular job within AI R&D. Most of the action with capabilities takeoff speed is with AI R&D (the main effect of AI automation is to accelerate hardware and software progress), so I don't think modelling this better would affect takeoff speeds by much. 
    3. Profit incentives. This is a significant weakness of the report - I don't explicitly model the incentives faced by firms to invest in AI R&D and do large training runs at all. (More precisely, I don't endogenise investment decisions as being made to maximise future profits, as happens in some economic models. Epoch is working on a model along these lines.) Instead I assume that once enough significant actors "wake up" to the strategic and economic potential of AI, investments will rise faster than they are today. So one possibility for slower takeoff is that AI firms just struggle to capture the value they create, and can't raise the money to go much higher than (e.g.) $5b training runs even after many actors have "woken up". 
  2. I am using semi-endogenous growth models to predict the rate of  future software and hardware progress, so they're very important. I don't know of a better approach to forecasting how investments in R&D will translate to progress, without investigating the details of where specifically progress might come from (I think that kind of research is very valuable, but it was far beyond the scope of this project). I think semi-endogenous growth models are a better fit to the data than the alternatives (e.g. see this). I do think it's a valid perspective to say "I just don't trust any method that tries to predict the rate of  technological progress from the amount of R&D investment", but if you do want to use such a method then I think this is the ~best you can do. In the Monte Carlo analysis, I put large uncertainty bars on the rate of returns to future R&D to represent the fact that the historical relationship between R&D investment and observed progress may fail to hold in the future. 
    1. I don't expect the papers you link to change my mind about this, from reading the abstracts. It seems like your second link is a critique of endogenous growth theory but not of semi-endog theory (it says "According to endogenous growth theory, permanent changes in certain policy variables have permanent effects on the rate of economic growth", but this isn't true of semi-endog theories). It seems like your first link is either looking at ~irrelevant evidence or drawing the incorrect conclusion (here's my perspective on the evidence mentioned in its abstract: "the slowing of growth in the OECD countries over the last two decades" [Tom: I expect semi-endog theories can explain this better than the neoclassical model. The population growth rate of the scientific workforce has been slowing, so we'd expect growth to slow as well; the neoclassical model has (as far as I'm aware) no comparable mechanism for explaining the slowdown.]; "the acceleration of growth in several Asian countries since the early 1960s" [this is about catch-up growth, so I wouldn't expect semi-endog theories to explain it; semi-endog theories are designed to explain growth of the global technological frontier]; "studies of the determinants of growth in a cross-country context" [again, semi-endog growth models aren't designed to explain this kind of thing at all]; and "sources of the differences in international productivity levels" [again, semi-endog growth models aren't designed to explain this kind of thing at all]). 
    2. You could see this as an argument for slower takeoff if you think "I'm pretty sure that looking into the details of where future progress might come from would conclude that progress will be slower than is predicted by the semi-endogenous model", although this isn't my current view.
    3. One way to think about this is to start from a method you may trust more than using semi-endog models: just extrapolating past trends in tech progress. But you might worry about this method if you expect R&D inputs to the relevant fields to rise much faster than in recent history (because you expect people to invest more and you expect AI to automate a lot of the work). Naively, your method is then going to underestimate the rate of progress. So using a semi-endog model addresses this problem. It matches the predictions of your initial method when R&D inputs continue to rise at their recent historical rate, but predicts faster progress in scenarios where R&D inputs rise more quickly than in recent history.
  3. > "Does this mean that, if you don't think a discontinuous jump in AI capabilities is likely, you should expect slower take-off than your model suggests? How substantial is this effect?" The results of the Monte Carlo don't include any discontinuous jumps (beyond the possibility that there's a continuous but very-fast transition from "AI that isn't economically useful" to AGI). So adjusting for discontinuities would only make takeoff faster. My own subjective probabilities do increase the probability of very fast takeoff by 5-10% to account for the possibility of other discontinuities. 
    1. "In section 8, the only uncertainty pointing in favour of fast takeoff is "there might be a discontinuous jump in AI capabilities"" There are other ways that I think my conclusions might be biased in favour of slower takeoff, in particular the ones mentioned here.
  4. "How did you model the AI production function? Relatedly, how did you model constraints like  energy costs, data costs,  semiconductor costs,  silicon costs etc.?"
    1. In the model the capability of the AI trained just depends on the compute used in training and the quality of the AI algorithms used; you combine the two multiplicatively. I didn't model energy/semiconductor/silicon costs except as implicit in FLOP/$ trends; nor did I model data costs (which feels like a significant limitation). 
    2. The CES task-based model is used as the production function for R&D to improve AI algorithms ("software") and AI chips ("hardware"), and for GDP. It gives slower takeoff than if you used Cobb-Douglas because you get more bottlenecked by the tasks AI still can't perform (e.g. tasks done by humans, or tasks done with equipment like experiments). See the small CES sketch after this list.
      1. There's a parameter rho that controls how close the behaviour is to Cobb-Douglas vs a model with very binding bottlenecks. I ultimately settled on values that make GDP much more bottlenecked by physical infrastructure than R&D progress is. This was based on it seeming to me that you could speed up R&D a lot by uploading the smartest minds and running billions of them at 100X speed, but couldn't increase GDP by nearly as much by having those uploads try to provide people with goods and services (holding the level of technology fixed). 
  5. "I'm vaguely worried that the report proves too much, in that I'd guess that the basic automation of the industrial revolution also automated maybe 70%+ of tasks by pre-industrial revolution GDP." I agree with this! I don't think it undermines the report - I discuss it here.  Interested to hear pushback if you disagree.