
If you're forecasting AI progress or asking someone about their timelines, what event should you focus on?

tl;dr it's messy and I don't have answers.

AGI, TAI, etc. are bad. Mostly because they are vague or don't capture what we care about.

  • AGI = artificial general intelligence (no canonical operationalization)
    • This is vague/imprecise, and is used vaguely/imprecisely
    • We care about capability-level; we don't directly care about generality
    • Maybe we should be paying attention to specific AI capabilities, AI impacts, or conditions for AI catastrophe
  • TAI = transformative AI, originally defined as "AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution" (see also discussion here)
    • "a transition comparable to . . . the agricultural or industrial revolution" is vague, and I don't know what it looks like
    • "precipitates" is ambiguous. Suppose for illustration that AI in 2025 would take 10 years to cause a transition comparable to the industrial revolution (if there was no more AI progress, or no more AI progress by humans), but AI in 2026 would take 1 year. Then the transition is precipitated by the 2026-AI, but the 2025-AI was capable enough to precipitate a transition. Is the 2025-AI TAI? If so, "TAI" seems to miss what we care about. And regardless, whether a set of AI systems precipitates a transition comparable to the industrial revolution is determined not just by the capabilities and other properties of the systems, but also by other facts about the world, which is weird. Also note that some forecasters believe that current Al would be "eventually transformative" but future Al will be transformative faster, so under some definitions, they believe we already have TAI.
    • This is often used vaguely/imprecisely
  • HLAI = human-level AI (no canonical operationalization)
    • This is kinda vague/imprecise but can be operationalized pretty well, I think
    • This may come after the stuff we should pay attention to
  • HLMI = high-level machine intelligence, defined as "when unaided machines can accomplish every task better and more cheaply than human workers"
    • This will come after the stuff we should pay attention to
  • PONR = (AI-induced) point of no return, vaguely defined as "the day we AI risk reducers lose the ability to significantly reduce AI risk"
    • But we may lose that ability gradually rather than in a binary, threshold-y way
    • And forecasting PONR flows through forecasting narrower events

More: APS-AI, PASTA, prepotent AI, fractional automation of 2020 cognitive tasks, and three levels of transformativeness; various operationalizations for predictions (e.g., Metaculus, Manifold, Samotsvety); and various definitions of AGI, TAI, and HLAI. Allan Dafoe uses “Advanced AI” to "gesture[] towards systems substantially more capable (and dangerous) than existing (2018) systems, without necessarily invoking specific generality capabilities or otherwise as implied by concepts such as 'Artificial General Intelligence' ('AGI')." Some people talk about particular visions of AI, such as CAIS, tech company singularity, and perhaps PASTA.

Some forecasting methods are well-suited for predicting particular kinds of conditions. For example, biological anchors most directly give information about time to humanlike capabilities. And "Could Advanced AI Drive Explosive Economic Growth?" uses economic considerations to give information about economic variables; it couldn't be adapted well for other kinds of predictions.
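
To give a concrete (and deliberately crude) flavor of the first kind of method, here is a toy compute-extrapolation sketch in Python. It is not the actual biological anchors model; every number in it is a placeholder made up for illustration.

```python
import math

# Toy sketch of a compute-based timelines estimate (not the actual Bio Anchors model).
# All values are placeholders made up for illustration.
largest_training_run_flop = 1e25  # assumed current frontier training compute
anchor_flop = 1e30                # assumed compute needed for humanlike capabilities
annual_growth_factor = 3.0        # assumed yearly growth in the largest training runs
current_year = 2025

# Years until compute crosses the assumed anchor, if growth continues at this rate
years_needed = math.log(anchor_flop / largest_training_run_flop, annual_growth_factor)
print(f"Toy anchor crossed around {current_year + math.ceil(years_needed)}")
```

The point is only that a method like this speaks directly to "time to humanlike capabilities" and says nothing directly about, say, economic variables.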

Operationalizations of things-like-AGI are ideally

  • useful or tracking something we care about
    • If you knew what specific capabilities would be a big deal, you could focus on predicting those capabilities
  • easy to forecast
  • maybe simple or concrete
  • maybe exclusively determined by the properties/capabilities of the AI, rather than also by other facts about the world

If you're eliciting forecasts, like in a survey, make sure respondents interpret what you say correctly. In particular, things you should clarify for timelines surveys (of a forecasting-sophisticated population like longtermist researchers, not the general public) are:

  • Whether the forecast is conditional on no catastrophe before AGI-or-whatever (see the sketch after this list)
  • Independent impression or all-things-considered view
  • (sometimes) when it happens vs when it is feasible
  • (sometimes) whether to counterfactually condition on progress not slowing due to increased safety concerns, not slowing due to interventions by the AI safety community, or something similar
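
To illustrate why the first clarification matters, here is a minimal sketch with made-up probabilities, assuming (hypothetically) that a catastrophe before AGI would prevent AGI from arriving by the target year:

```python
# Toy illustration of conditional vs. unconditional timelines forecasts.
# Assumption (hypothetical): a catastrophe before AGI prevents AGI by the target year.
# Both probabilities below are made up for illustration.
p_agi_2050_given_no_catastrophe = 0.6  # respondent's conditional forecast
p_no_catastrophe_before_agi = 0.9

# Unconditional probability implied by the two numbers above
p_agi_2050 = p_agi_2050_given_no_catastrophe * p_no_catastrophe_before_agi
print(round(p_agi_2050, 2))  # ~0.54, noticeably lower than the conditional 0.6
```

So two respondents can give meaningfully different numbers, while agreeing about everything, simply because one answers the conditional question and the other the unconditional one.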

Forecasting a particular threshold of AI capabilities may be asking the wrong question. To inform at least some interventions, "it may be more useful to know when various 'pre-TAI' capability levels would be reached, in what order, or how far apart from each other, rather than to know when TAI will be reached" (quoting Michael Aird). "We should think about the details of different AI capabilities that will emerge over time [...] and how those details will affect the actions we can profitably take" (quoting Ashwin Acharya).

This post draws on some research by and discussion with Michael Aird, Daniel Kokotajlo, Ashwin Acharya, and Matthijs Maas.

Comments

Helpful post, Zach! I think it's more useful and concrete to focus on asking about specific capabilities instead of asking about AGI/TAI etc., and I'm pushing myself to ask such questions (e.g., when do you expect to have LLMs that can emulate Richard Feynman-level -of-text). Also, I like the generality vs capability distinction. We already have a generalist (Gato), but we don't consider it to be an AGI (I think).

Your main two concerns seem to be that the terms are either vague or don't quite capture what we care about.

However, it seems that those issues might be insurmountable, given that we don't know the precise nature of the future AI that has the properties we worry about.
