This is a linkpost for https://arxiv.org/abs/2306.02519

(Crossposted to LessWrong)

Abstract

The linked paper is our submission to the Open Philanthropy AI Worldviews Contest. In it, we estimate the likelihood of transformative artificial general intelligence (AGI) by 2043 and find it to be <1%.

Specifically, we argue:

  • The bar is high: AGI as defined by the contest—something like AI that can perform nearly all valuable tasks at human cost or less—which we will call transformative AGI, is a much higher bar than merely massive progress in AI, or even the unambiguous attainment of expensive superhuman AGI or cheap but uneven AGI.
  • Many steps are needed: The probability of transformative AGI by 2043 can be decomposed as the joint probability of a number of necessary steps, which we group into categories of software, hardware, and sociopolitical factors.
  • No step is guaranteed: For each step, we estimate a probability of success by 2043, conditional on prior steps being achieved. Many steps are quite constrained by the short timeline, and our estimates range from 16% to 95%.
  • Therefore, the odds are low: Multiplying the cascading conditional probabilities together, we estimate that transformative AGI by 2043 is 0.4% likely. Reaching >10% seems to require probabilities that feel unreasonably high, and even 3% seems unlikely.

Thoughtfully applying the cascading conditional probability approach to this question yields lower probability values than is often supposed. This framework helps enumerate the many future scenarios where humanity makes partial but incomplete progress toward transformative AGI.

Executive summary

For AGI to do most human work for <$25/hr by 2043, many things must happen.

We forecast cascading conditional probabilities for 10 necessary events, and find they multiply to an overall likelihood of 0.4%:

Event (forecast by 2043 or TAGI, conditional on prior steps)

  • We invent algorithms for transformative AGI: 60%
  • We invent a way for AGIs to learn faster than humans: 40%
  • AGI inference costs drop below $25/hr (per human equivalent): 16%
  • We invent and scale cheap, quality robots: 60%
  • We massively scale production of chips and power: 46%
  • We avoid derailment by human regulation: 70%
  • We avoid derailment by AI-caused delay: 90%
  • We avoid derailment from wars (e.g., China invades Taiwan): 70%
  • We avoid derailment from pandemics: 90%
  • We avoid derailment from severe depressions: 95%

Joint odds: 0.4%

If you think our estimates are pessimistic, feel free to substitute your own here. You’ll find it difficult to arrive at odds above 10%.
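
For concreteness, here is a minimal sketch (in Python, using the estimates from the table above) of the arithmetic behind the 0.4% figure; substituting your own estimates shows how quickly the joint odds move:

```python
# Conditional probabilities for the ten necessary steps (from the table above).
# Each value is conditional on all prior steps having occurred.
steps = {
    "We invent algorithms for transformative AGI": 0.60,
    "We invent a way for AGIs to learn faster than humans": 0.40,
    "AGI inference costs drop below $25/hr": 0.16,
    "We invent and scale cheap, quality robots": 0.60,
    "We massively scale production of chips and power": 0.46,
    "We avoid derailment by human regulation": 0.70,
    "We avoid derailment by AI-caused delay": 0.90,
    "We avoid derailment from wars": 0.70,
    "We avoid derailment from pandemics": 0.90,
    "We avoid derailment from severe depressions": 0.95,
}

joint = 1.0
for step, probability in steps.items():
    joint *= probability

print(f"Joint odds of transformative AGI by 2043: {joint:.2%}")  # ~0.40%
```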

Of course, the difficulty is by construction. Any framework that multiplies ten probabilities together is almost fated to produce low odds.

So a good skeptic must ask: Is our framework fair?

There are two possible errors to beware of:

  • Did we neglect possible parallel paths to transformative AGI?
  • Did we hew toward unconditional probabilities rather than fully conditional probabilities?

We believe we are innocent of both sins.

Regarding failing to model parallel disjunctive paths:

  • We have chosen generic steps that don’t make rigid assumptions about the particular algorithms, requirements, or timelines of AGI technology
  • One opinionated claim we do make is that transformative AGI by 2043 will almost certainly be run on semiconductor transistors powered by electricity and built in capital-intensive fabs, and we spend many pages justifying this belief

Regarding failing to really grapple with conditional probabilities:

  • Our conditional probabilities are, in some cases, quite different from our unconditional probabilities. In particular, we assume that a world on track to transformative AGI will…
    • Construct semiconductor fabs and power plants at a far faster pace than today (our unconditional probability is substantially lower)
    • Have invented very cheap and efficient chips by today’s standards (our unconditional probability is substantially lower)
    • Have higher risks of disruption by regulation
    • Have higher risks of disruption by war
    • Have lower risks of disruption by natural pandemic
    • Have higher risks of disruption by engineered pandemic

Therefore, for the reasons above—namely, that transformative AGI is a very high bar (far higher than “mere” AGI) and many uncertain events must jointly occur—we are persuaded that the likelihood of transformative AGI by 2043 is <1%, a much lower number than we otherwise intuit. We nonetheless anticipate stunning advancements in AI over the next 20 years, and forecast substantially higher likelihoods of transformative AGI beyond 2043.

For details, read the full paper.

About the authors

This essay is jointly authored by Ari Allyn-Feuer and Ted Sanders. Below, we share our areas of expertise and track records of forecasting. Of course, credentials are no guarantee of accuracy. We share them not to appeal to our authority (plenty of experts are wrong), but to suggest that if it sounds like we’ve said something obviously wrong, it may merit a second look (or at least a compassionate understanding that not every argument can be explicitly addressed in an essay trying not to become a book).

Ari Allyn-Feuer

Areas of expertise

I am a decent expert in the complexity of biology and using computers to understand biology.

  • I earned a Ph.D. in Bioinformatics at the University of Michigan, where I spent years using ML methods to model the relationships between the genome, epigenome, and cellular and organismal functions. At graduation I had offers to work in the AI departments of three large pharmaceutical and biotechnology companies, plus a biological software company.
  • I have spent the last five years as an AI Engineer, later Product Manager, now Director of AI Product, in the AI department of GSK, an industry-leading AI group which uses cutting edge methods and hardware (including Cerebras units and work with quantum computing), is connected with leading academics in AI and the epigenome, and is particularly engaged in reinforcement learning research.

Track record of forecasting

While I don’t have Ted’s explicit formal credentials as a forecaster, I’ve issued some pretty important public correctives of then-dominant narratives:

  • I said in print on January 24, 2020 that due to its observed properties, the then-unnamed novel coronavirus spreading in Wuhan, China, had a significant chance of promptly going pandemic and killing tens of millions of humans. It subsequently did.
  • I said in print in June 2020 that it was an odds-on favorite for mRNA and adenovirus COVID-19 vaccines to prove highly effective and be deployed at scale in late 2020. They subsequently did and were.
  • I said in print in 2013, when the Hyperloop proposal was released, that the technical approach of air bearings in overland vacuum tubes on scavenged rights of way wouldn't work. Subsequently, despite having insisted these elements would work and having spent millions of dollars on them, every Hyperloop company abandoned all three, and development of Hyperloops has largely ceased.
  • I said in print in 2016 that Level 4 self-driving cars would not be commercialized or near commercialization by 2021 due to the long tail of unusual situations, when several major car companies said they would. They subsequently were not.
  • I used my entire net worth and borrowing capacity to buy an abandoned mansion in 2011, and sold it seven years later for five times the price. 

Luck played a role in each of these predictions, and I have also made other predictions that didn’t pan out as well, but I hope my record reflects my decent calibration and genuine open-mindedness.

Ted Sanders

Areas of expertise

I am a decent expert in semiconductor technology and AI technology.

  • I earned a PhD in Applied Physics from Stanford, where I spent years researching semiconductor physics and the potential of new technologies to beat the 60 mV/dec limit of today's silicon transistor (e.g., magnetic computing, quantum computing, photonic computing, reversible computing, negative capacitance transistors, and other ideas). These years of research inform our perspective on the likelihood of hardware progress over the next 20 years.
  • After graduation, I had the opportunity to work at Intel R&D on next-gen computer chips, but instead, worked as a management consultant in the semiconductor industry and advised semiconductor CEOs on R&D prioritization and supply chain strategy. These years of work inform our perspective on the difficulty of rapidly scaling semiconductor production.
  • Today, I work on AGI technology as a research engineer at OpenAI, a company aiming to develop transformative AGI. This work informs our perspective on software progress needed for AGI. (Disclaimer: nothing in this essay reflects OpenAI’s beliefs or its non-public information.)

Track record of forecasting

I have a track record of success in forecasting competitions:

  • Top prize in SciCast technology forecasting tournament (15 out of ~10,000, ~$2,500 winnings)
  • Top Hypermind US NGDP forecaster in 2014 (1 out of ~1,000)
  • 1st place Stanford CME250 AI/ML Prediction Competition (1 of 73)
  • 2nd place ‘Let’s invent tomorrow’ Private Banking prediction market (2 out of ~100)
  • 2nd place DAGGRE Workshop competition (2 out of ~50)
  • 3rd place LG Display Futurecasting Tournament (3 out of 100+)
  • 4th Place SciCast conditional forecasting contest
  • 9th place DAGGRE Geopolitical Forecasting Competition
  • 30th place Replication Markets (~$1,000 winnings)
  • Winner of ~$4200 in the 2022 Hybrid Persuasion-Forecasting Tournament on existential risks (was told my ranking was “quite well”)

Each finish resulted from luck alongside skill, but in aggregate I hope my record reflects my decent calibration and genuine open-mindedness.

Discussion

We look forward to discussing our essay with you in the comments below. The more we learn from you, the more pleased we'll be.

If you disagree with our admittedly imperfect guesses, we kindly ask that you supply your own preferred probabilities (or framework modifications). It's easier to tear down than build up, and we'd love to hear how you think this analysis can be improved.

Comments

I don't think I understand the structure of this estimate, or else I might understand and just be skeptical of it. Here are some quick questions and points of skepticism.

Starting from the top, you say:

We estimate optimistically that there is a 60% chance that all the fundamental algorithmic improvements needed for AGI will be developed on a suitable timeline.

This section appears to be an estimate of all-things-considered feasibility of transformative AI, and draws extensively on evidence about how lots of things go wrong in practice when implementing complicated projects. But then in subsequent sections you talk about how even if we "succeed" at this step there is still a significant probability of failing because the algorithms don't work in a realistic amount of time.

Can you say what exactly you are assigning a 60% probability to, and why it's getting multiplied with ten other factors? Are you saying that there is a 40% chance that by 2043 AI algorithms couldn't yield AGI no matter how much serial time and compute they had available? (It seems surprising to claim that even by 2023!) Presumably not that, but what exactly are you giving a 60% chance?

(ETA: after reading later sectio... (read more)

Excellent comment; thank you for engaging in such detail. I'll respond piece by piece. I'll also try to highlight the things you think we believe but don't actually believe.

Section 1: Likelihood of AGI algorithms

"Can you say what exactly you are assigning a 60% probability to, and why it's getting multiplied with ten other factors? Are you saying that there is a 40% chance that by 2043 AI algorithms couldn't yield AGI no matter how much serial time and compute they had available? (It seems surprising to claim that even by 2023!) Presumably not that, but what exactly are you giving a 60% chance?

Yes, we assign a 40% chance that we don't have AI algorithms by 2043 capable of learning to do nearly any human task with realistic amounts of time and compute. Some things we probably agree on:

  • Progress has been promising and investment is rising.
  • Obviously the development of AI that can do AI research more cheaply than humans could be a huge accelerant, with the magnitude depending on the value-to-cost ratio. Already GPT-4 is accelerating my own software productivity, and future models over the next twenty years will no doubt be leagues better (as well as more efficient).
  • Obviously slow progre
... (read more)

Incidentally, I'm puzzled by your comment and others that suggest we might already have algorithms for AGI in 2023. Perhaps we're making different implicit assumptions of realistic compute vs infinite compute, or something else. To me, it feels clear we don't have the algorithms and data for AGI at present.

I would guess that more or less anything done by current ML can be done by ML from 2013 but with much more compute and fiddling. So it's not at all clear to me whether existing algorithms are sufficient for AGI given enough compute, just as it wasn't clear in 2013. I don't have any idea what makes this clear to you.

Given that I feel like compute and algorithms mostly trade off, hopefully it's clear why I'm confused about what the 60% represents. But I'm happy for it to mean something like: it makes sense at all to compare AI performance vs brain performance, and expect them to be able to solve a similar range of tasks within 5-10 orders of magnitude of the same amount of compute.

But as we discuss in the essay, 20 years is not a long time, much easier problems are taking longer, and there's a long track record of AI scientists being overconfident about the pace of progress (counter

... (read more)

Are you saying that e.g. a war between China and Taiwan makes it impossible to build AGI? Or that serial time requirements make AGI impossible? Or that scaling chips means AGI is impossible?

C'mon Paul - please extend some principle of charity here. :)

You have repeatedly ascribed silly, impossible beliefs to us and I don't know why (to be fair, in this particular case you're just asking, not ascribing). Genuinely, man, I feel bad that our writing has either (a) given the impression that we believe such things or (b) given the impression that we're the type of people who'd believe such things.

Like, are these sincere questions? Is your mental model of us that there's a genuine uncertainty over whether we'll say "Yes, a war precludes AGI" vs "No, a war does not preclude AGI"?

To make it clear: No, of course a war between China and Taiwan does not make it impossible to build AGI by 2043. As our essay explicitly says.

Some things can go wrong and you can still get AGI by 2043. If you want to argue you can't build AGI if something goes wrong, that's a whole different story. So multiplying probabilities (even conditional probabilities) for none of these things happening doesn't seem right.

To mak... (read more)

8
Paul_Christiano
My point in asking "Are you assigning probabilities to a war making AGI impossible?" was to emphasize that I don't understand what 70% is a probability of, or why you are multiplying these numbers. I'm sorry if the rhetorical question caused confusion. My current understanding is that 0.7 is basically just the ratio (Probability of AGI before thinking explicitly about the prospect of war) / (Probability of AGI after thinking explicitly about prospect of war). This isn't really a separate event from the others in the list, it's just a consideration that lengthens timelines. It feels like it would also make sense to list other considerations that tend to shorten timelines. (I do think disruptions and weird events tend to make technological progress slower rather than faster, though I also think they tend to pull tiny probabilities up by adding uncertainty.)
3
Ted Sanders
I don't follow you here. Why is a floating point operation 1e5 bit erasures today? Why does a fp16 operation necessitate 16 bit erasures? As an example, if we have two 16-bit registers (A, B) and we do a multiplication to get (A, A*B), where is the 16 bits of information loss? (In any case, no real need to reply to this. As someone who has spent a lot of time thinking about the Landauer limit, my main takeaway is that it's more irrelevant than often supposed, and I suspect getting to the bottom of this rabbit hole is not going to yield much for us in terms of TAGI timelines.)
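For readers following this exchange, a reference calculation (not a claim made in the thread; the 1e5-erasures-per-FLOP figure is the one being questioned above): Landauer's principle puts the minimum energy to erase one bit at temperature $T$ at

$$E_{\min} = k_B T \ln 2 \approx (1.38\times10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(0.693) \approx 2.9\times10^{-21}\,\mathrm{J},$$

so an operation that really did entail $10^5$ bit erasures would have an energy floor of roughly $3\times10^{-16}$ J, i.e. about 0.3 W at $10^{15}$ operations per second. Whether a floating-point operation must in fact erase that many bits is exactly the point under dispute.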
2
Ted Sanders
Yep. We're using the main definition supplied by Open Philanthropy, which I'll paraphrase as "nearly all human work at human cost or less by 2043." If the definition was more liberal, e.g., AGI as smart as humans, or AI causing world GDP to rise by >100%, we would have forecasted higher probabilities. We expect AI to get wildly more powerful over the next decades and wildly change the face of human life and work. The public is absolutely unprepared. We are very bullish on AI progress, and we think AI safety is an important, tractable, and neglected problem. Creating new entities with the potential to be more powerful than humanity is a scary, scary thing.
2
Ted Sanders
Interesting - this is perhaps another good crux between us. My impression is that existing robot bodies are not good enough to do most human jobs, even if we had human-level AGI today. Human bodies self-repair, need infrequent maintenance, last decades, have multi-modal high bandwidth sensors built in, and are incredibly energy efficient. One piece of evidence for this is how rare tele-operated robots are. There are plenty of generally intelligent humans around the world who would be happy to control robots for $1/hr, and yet they are not being employed to do so.
2
Paul_Christiano
I didn't mean to imply that human-level AGI could do human-level physical labor with existing robotics technology; I was using "powerful" to refer to a higher level of competence. I was using "intermediate levels" to refer to human-level AGI, and assuming it would need cheap human-like bodies. Though mostly this seems like a digression. As you mention elsewhere, the bigger crux is that it seems to me like automating R&D would radically shorten timelines to AGI and be amongst the most important considerations in forecasting AGI. (For this reason I don't often think about AGI timelines, especially not for this relatively extreme definition. Instead I think about transformative AI, or AI that is as economically impactful as a simulated human for $X, or something along those lines.)
1
Ted Sanders
What's an algorithm from 2013 that you think could yield AGI, if given enough compute? What would its inputs, outputs, and training look like? You're more informed than me here and I would be happy to learn more.
2
Ryan Greenblatt
I'm not sure I buy '2013 algorithms are literally enough', but it does seem very likely to me that in practice you get AGI very quickly (<2 years) if you give out GPUs which have (say) 10^50 FLOPS. (These GPUs are physically impossible, but I'm just supposing this to make the hypothetical easier. In particular, 2013 algorithms don't parallelize very well and I'm just supposing this away.) And, I think 2023 algorithms are literally enough with this amount of FLOP (perhaps with 90% probability).

For a concrete story of how this could happen, let's imagine training a model with around 10^50 FLOP to predict all human data ever produced (say represented as uncompressed bytes and doing next token prediction) and simultaneously training with RL to play every game ever. We'll use the largest model we can get with this flop budget, probably well over 10^25 parameters. Then, you RL on various tasks, prompt the AI, or finetune on some data (as needed). This can be done with either 2013 or 2023 algorithms. I'm not sure if it's enough with 2013 algorithms (in particular, I'd be worried that the AI would be extremely smart but the elicitation technology wasn't there to get the AI to do anything useful). I'd put success with 2013 algos and this exact plan at 50%. It seems likely enough with 2023 algorithms (perhaps 80% chance of success).

In 2013 this would look like training an LSTM. Deep RL was barely developed, but did exist. In 2023 this looks similar to GPT4 but scaled way up and trained on all sources of data and trained to play games etc.
1
Ted Sanders
Let me replay my understanding to you, to see if I understand. You are predicting that...

IF:
  • we gathered all files stored on hard drives
  • ...decompressed them into streams of bytes
  • ...trained a monstrous model to predict the next chunk in each stream
  • ...and also trained it to play every winnable computer game ever made

THEN:
  • You are 50% confident we'd get AGI* using 2013 algos
  • You are 80% confident we'd get AGI* using 2023 algos

WHERE:
  • *AGI means AI that is general; i.e., able to generalize to all sorts of data way outside its training distribution. Meaning:
    • It avoids overfitting on the data despite its massive parameter count. E.g., not just memorizing every file or brute forcing all the exploitable speedrunning bugs in a game that don't generalize to real-world understanding.
    • It can learn skills and tasks that are barely represented in the computer dataset but that real-life humans are nonetheless able to quickly understand and learn due to their general world models
    • It can be made to develop planning, reasoning, and strategy skills not well represented by next-token prediction (e.g., it would learn how to write a draft, reflect on it, and edit it, even though it's never been trained to do that and has only been optimized to append single tokens in sequence)
    • It simultaneously avoids underfitting due to any regularization techniques used to avoid the above overfitting problems

ASSUMING:
  • We don't train on data not stored on computers
  • We don't train on non-computer games (but not a big crux if you want to posit high fidelity basketball simulations, for example)
  • We don't train on games without win conditions (but not a big crux, as most have them)

Is this a correct restatement of your prediction? And are your confidence levels for this resulting in AGI on the first try? Within ten tries? Within a year of trial and error? Within a decade of trial and error? (Rounding to the nearest tenth of a percent, I personal
2
Ryan Greenblatt
This seems like a pretty good description of this prediction. Your description misses needing a finishing step of doing some RL, prompting, and generally finetuning on the task of interest (similar to GPT4). But this isn't doing much of the work, so it's not a big deal. Additionally, this sort of finishing step wasn't really developed in 2013, so it seems less applicable to that version. I'm also assuming some iteration on hyperparameters and data manipulation etc. in keeping with the techniques used in the respective time periods. So, 'first try' isn't doing that much work here because you'll be iterating a bit in the same way that people generally iterate a bit (but you won't be doing novel research). My probabilities are for the 'first shot' but after you do some preliminary experiments to verify hyper-params etc. And with some iteration on the finetuning. There might be a non-trivial amount of work on the finetuning step also, I don't have a strong view here.
1
Ryan Greenblatt
My general view is 'if the compute is there, the AGI will come'. I'm going out on more of a limb with this exact plan and I'm much less confident in the plan than in this general principle.
1
Ryan Greenblatt
Here are some example reasons why I think my high probabilities are plausible:
  • The training proposal I gave is pretty close to how models like GPT4 are trained. These models are pretty general and are quite strategic etc. Adding more FLOP makes a pretty big qualitative difference.
  • It doesn't seem to me like you have to generalize very far for this to succeed. I think existing data trains you to do basically everything humans can do. (See GPT4 and prompting)
  • Even if this proposal is massively inefficient, we're throwing an absurd amount of FLOP at it.
  • It seems like the story for why humans are intelligent looks reasonably similar to this story: have big, highly functional brains, learn to predict what you see, train to achieve various goals, generalize far. Perhaps you think human intelligence is very unlikely ex-ante (<0.04% likely).
1
Ryan Greenblatt
It's worth noting that I think that GPT5 (with finetuning and scaffolding, etc.) is perhaps around 2% likely to be AGI. Of course, you'd need serious robotic infrastructure and much larger pool of GPUs to automate all labor.
1
Ted Sanders
Bingo. We didn't take the time to articulate it fully, but yeah you got it. We think it makes it easier to forecast these things separately rather than invisibly smushing them together into a smaller set of factors.  We are multiplying out factors. Not sure I follow you here.
1
Ted Sanders
Agree 100%. Our essay does exactly this, forecasting over a wide range of potential compute needs, before taking an expected value to arrive at a single summary likelihood. Sounds like you think we should have ascribed more probability to lower ranges, which is a totally fair disagreement.
1
Ted Sanders
Pretty fair summary. 1e6, though, not 1e7. And honestly I could be pretty easily persuaded to go a bit lower by arguments such as:
  • Max firing rate of 100 Hz is not the informational content of the channel (that buys maybe 1 OOM)
  • Maybe a smaller DNN could be found, but wasn't
  • It might take a lot of computational neurons to simulate the I/O of a single synapse, but it also probably takes a lot of synapses to simulate the I/O of a single computational neuron
Dropping our estimate by 1-2 OOMs would increase step 3 by 10%abs-20%abs. It wouldn't have much effect on later estimates, as they are already conditional on success in step 3.
1
Ted Sanders
Maybe, but maybe not, which is why we forecast a number below 100%. For example, it is very very rare to ever see a CEO hired with <2 years of experience, even if they are very intelligent and have read a lot of books and have watched a lot of interviews. Some reasons might be irrational or irrelevant, but surely some of it is real. A CEO job requires a large constellation of skills practiced and refined over many years. E.g., relationship building with customers, suppliers, shareholders, and employees. For an AGI to be installed as CEO of a corporation in under two years, human-level learning would not be enough - it would need to be superhuman in its ability to learn. Such superhuman learning could come from simulation (e.g., modeling and simulating how a potential human partner would react to various communication styles), come from parallelization (e.g., being installed as a manager in 1,000 companies and then compiling and sharing learnings across copies), or from something else. I agree that skills learned from reading or thinking or simulating could happen very fast. Skills requiring real-world feedback that is expensive, rare, or long-delayed would progress more slowly.
1
Victor Levoso
You seem to be missing the possibility of superhuman learning coming from superhuman sample efficiency, in the sense of requiring less feedback to acquire skills, including actively experimenting in useful directions more effectively.
-1
Ted Sanders
Nope, we didn't miss the possibility of AGIs being very sample efficient in their learning. We just don't think it's certain, which is why we forecast a number below 100%. Sounds like your estimate is higher than ours; however, that doesn't mean we missed the possibility.

Am I really the only person who thinks it's a bit crazy that we use this blobby comment thread as if it's the best way we have to organize disagreement/argumentation for audiences? I feel like we could almost certainly improve by using, e.g., a horizontal flow as is relatively standard in debate.[1]

With a generic example below:

To be clear, the commentary could still incorporate non-block/prose text.

Alternatively, people could use something like Kialo.com. But surely there has to be something better than this comment thread, in terms of 1) ease of determining where points go unrefuted, 2) ease of quickly tracing all responses in specific branches (rather than having to skim through the entire blob to find any related responses), and 3) seeing claims side-by-side, rather than having to scroll back and forth to see the full text. (Quoting definitely helps with this, though!)

  1. ^

    (Depending on the format: this is definitely standard in many policy debate leagues.)

1
Joe Rogero
How hard do you suppose it might be to use an AI to scrub the comments and generate something like this? It may be worth doing manually for some threads, even, but it's easier to get people to adopt if the debate already exists and only needs tweaking. There may even already exist software that accepts text as input and outputs a Kialo-like debate map (thank you for alerting me that Kialo exists, it's neat). 
2
Marcel D
Over the past few months I have occasionally tried getting LLMs to do some tasks related to argument mapping, but I actually don't think I've tried that specifically, and probably should. I'll make a note to myself to try here.
1
zchuang
But I don't think we could have predicted people would dive into the comments like this. Usually comments have minimal engagement. There's a lesswrong debate format for posts but that's usually with a moderator and such. This seems spontaneous.
2
Marcel D
Are you referring to this format on LessWrong? If so I can’t say I’m particularly impressed, as it still seems to suffer from the problems of linear dialogue vs. a branching structure (e.g., it is hard to see where points have been dropped, it is harder to trace specific lines of argument). But I don’t recall seeing this, so thanks for the flag. As for “I don’t think we could have predicted people…”, that’s missing my point(s). I’m partially saying “this comment thread seems like it should be a lesson/example of how text-blob comment-threads are inefficient in general.” However, even in this specific case Paul knew that he was laying out a multi-pronged criticism, and if the flow format existed he could have presented his claims that way, to make following the debate easier—assuming Ted would reply.  Ultimately, it just seems to me like it would be really logical to have a horizontal flow UI,[1] although I recognize I am a bit biased by my familiarity with such note taking methods from competitive debate.
1. ^ In theory it need not be as strictly horizontal as I lay out; it could be a series of vertically nested claims, kept largely within one column—where the idea is that instead of replying to the entire comment you can just reply to specific blocks in the original comment (e.g., accessible in a drop down at the end of a specific argument block rather than the end of the entire comment).
1
zchuang
I don't know. As someone who was/still is quite good at debating and connected to debating communities I would find a flow-centric comment thread bothersome and unhelpful for reading the dialogues. I quite like internet comments as is in this UI. 
2
Marcel D
I find this strange/curious. Is your preference more a matter of “Traditional interfaces have good features that a flowing interface would lack“ (or some other disadvantage to switching) or “The benefits of switching to a flowing interface would be relatively minor”? For example on the latter, do you not find it more difficult with the traditional UI to identify dropped arguments? Or suppose you are fairly knowledgeable about most of the topics but there’s just one specific branch of arguments you want to follow: do you find it easy to do that? (And more on the less-obvious side, do you think the current structure disincentivizes authors from deeply expanding on branches?) On the former, I do think that there are benefits to having less-structured text (e.g., introductions/summaries and conclusions) and that most argument mapping is way too formal/rigid with its structure, but I think these issues could be addressed in the format I have in mind.
1
zchuang
I asked other debaters/EAs intersecting and they agreed with my line of reasoning that it would be contrived and lead to poorly structured arguments. I can elaborate if you really want but I hesitate spending time to write this out because I'm behind on work and don't think it'll have any impact on anything to be honest.

I put little weight on this analysis because it seems like a central example of the multiple stage fallacy. But it does seem worth trying to identify clear examples of the authors not accounting properly for conditionals. So here are three concrete criticisms (though note that these are based on skimming rather than close-reading the PDF):

  • A lot of the authors' analysis about the probability of war derailment is focused on Taiwan, which is currently a crucial pivot point. But conditional on chip production scaling up massively, Taiwan would likely be far less important.
  • If there is extensive regulation of AI, it will likely slow down both algorithmic and hardware progress. So conditional on the types of progress listed under events 1-5, the probability of extensive regulation is much lower than it would be otherwise.

The third criticism is more involved; I'll summarize it as "the authors are sometimes treating the different events as sequential in time, and sometimes sequential in logical flow". For example, the authors assign around 1% to events 1-5 happening before 2043. If they're correct, then conditioning on events 1-5 happening before 2043, they'll very likely only happen just be... (read more)

3
Ted Sanders
Great comment! Thanks especially for trying to point to the actual stages going wrong, rather than hand-waving about the multiple stage fallacy, which we are all of course well aware of.

Replying to the points: From my POV, if events 1-5 have happened, then we have TAGI. It's already done. The derailments are not things that could happen after TAGI to return us to a pre-TAGI state. They are events that happen before TAGI and modify the estimates above. Yes, we think AGI will precede TAGI by quite some time, and therefore it's reasonable to talk about derailments of TAGI conditional on AGI.
6
richard_ngo
If events 1-5 constitute TAGI, and events 6-10 are conditional on AGI, and TAGI is very different from AGI, then you can't straightforwardly get an overall estimate by multiplying them together. E.g. as I discuss above, 0.3 seems like a reasonable estimate of P(derailment from wars) if the chip supply remains concentrated in Taiwan, but doesn't seem reasonable if the supply of chips is on track to be "massively scaled up".
4
Ted Sanders
I think that's a great criticism. Perhaps our conditional odds of Taiwan derailment are too high because we're too anchored to today's distribution of production. One clarification/correction to what I said above: I see the derailment events 6-10 as being conditional on us being on the path to TAGI had the derailments not occurred. So steps 1-5 might not have happened yet, but we are in a world where they will happen if the derailment does not occur. (So not really conditional on TAGI already occurring, and not necessarily conditional on AGI, but probably AGI is occurring in most of those on-the-path-to-TAGI scenarios.) Edit: More precisely, the cascade is: - Probability of us developing TAGI, assuming no derailments - Probability of us being derailed, conditional on otherwise being on track to develop TAGI without derailment

More precisely, the cascade is:
- Probability of us developing TAGI, assuming no derailments
- Probability of us being derailed, conditional on otherwise being on track to develop TAGI without derailment

Got it. As mentioned I disagree with your 0.7 war derailment. Upon further thought I don't necessarily disagree with your 0.7 "regulation derailment", but I think that in most cases where I'm talking to people about AI risk, I'd want to factor this out (because I typically want to make claims like "here's what happens if we don't do something about it"). 

Anyway, the "derailment" part isn't really the key disagreement here. The key disagreement is methodological. Here's one concrete alternative methodology which I think is better: a more symmetric model which involves three estimates:

  1. Probability of us developing TAGI, assuming that nothing extreme happens
  2. Probability of us being derailed, conditional on otherwise being on track to develop TAGI
  3. Probability of us being rerailed, conditional on otherwise not being on track to develop TAGI

By "rerailed" here I mean roughly "something as extreme as a derailment happens, but in a way which pushes us over the threshold to be on track toward... (read more)

5
Ted Sanders
Great comment. We didn't explicitly allocate probability to those scenarios, and if you do, you end up with much higher numbers. Very reasonable to do so.

This is a really impressive paper full of highly interesting arguments. I am enjoying reading it. That said, and I hope I'm not being too dismissive here, I have a strong suspicion that the central argument in this paper suffers from what Eliezer Yudkowsky calls the multiple stage fallacy,

The purported "Multiple-Stage Fallacy" is when you list multiple 'stages' that need to happen on the way to some final outcome, assign probabilities to each 'stage', multiply the probabilities together, and end up with a small final answer. The alleged problem is that you can do this to almost any kind of proposition by staring at it hard enough, including things that actually happen. [...]

Often, people neglect to consider disjunctive alternatives - there may be more than one way to reach a stage, so that not all the listed things need to happen... So if you list enough stages, you can drive the apparent probability of anything down to zero, even if you seem to be soliciting probabilities from the reader.

I think canonicalizing this as a Fallacy was very premature: Yudkowsky wrote his post based on two examples:

I wrote a response at the time, ending with:

To figure out whether it tends to help or hurt in general, it would be better to get a lot more examples. It turns out, though, that this is a very common tool for people to use when estimating the efficacy of a conversion funnel. You figure out what the steps are, get estimates for each step, and that gives you an overall conversion rate. These aren't perfect, but they do pretty well, and they do a lot better than trying to estimate a conversion rate without breaking it down.

Are there other examples of people using this method, yielding success or failure?

In the discussion people gave a few other examples of people using this sort of model:

... (read more)

Related to this, you can only multiply probabilities if they're independent, but I think a lot of the listed probabilities are positively correlated, which means the joint probability is higher than their product. For example, it seems to me that "AGI inference costs drop below $25/hr" and "We massively scale production of chips and power" are strongly correlated.

9
Ted Sanders
Agreed. Factors like AI progress, inference costs, and manufacturing scale needed are massively correlated. We discuss this in the paper. Our unconditional independent forecast of semiconductor production would be much, much lower than our conditional forecast of 46%, for example.
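For reference, the multiplication being defended here is the chain rule rather than an independence assumption (a sketch, writing $E_1,\dots,E_{10}$ for the ten events in the table):

$$P(E_1 \cap \cdots \cap E_{10}) = P(E_1)\,P(E_2 \mid E_1)\cdots P(E_{10} \mid E_1,\dots,E_9),$$

which holds however correlated the events are; the question raised above is whether each stated factor is really the fully conditional probability.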
3
Ted Sanders
Thanks for the kind words. Regarding the multiple stage fallacy, we recognize it's a risk of a framework like this and go to some lengths explaining why we think our analysis does not suffer from it. (Namely, in the executive summary, the discussion, and the appendix "Why 0.4% might be less confident than it seems.") What are the disjunctive alternatives you think our framework misses?
7
Erich_Grunewald 🔸
Like Matthew, I think your paper is really interesting and impressive. Some issues I have with the methodology:
  • Your framework excludes some factors that could cause the overall probability to increase.
    • For example, I can think of ways that a great power conflict (over Taiwan, say) actually increases the chances of TAI. But your framework doesn't easily account for this.
    • You could have factored it in in all or some of the other stages, but I'm not sure you have, and it seems generally like this asymmetry (the "positive" effect of an event is factored into various other stages if at all, but the "negative" effect of the same event is estimated on its own conjunctive stage) will tend to give lower overall probabilities than it should.
  • It seems like you sometimes don't fully condition on preceding propositions.
    • You calculate a base rate of "10% chance of [depression] in the next 20 years", and write: "Conditional on being in a world on track toward transformative AGI, we estimate a ~0.5%/yr chance of depression, implying a ~10% chance in the next 20 years."
    • But this doesn't seem like fully conditioning on a world with TAI that is cheap, that can automate ~100% of human tasks, and that can be deployed at scale, and that is relatively unregulated. It seems like once that happens, and when it's nearly happening (e.g. AIs automate 20% of 2022-tasks), the probability of a severe depression should be way below historical base rates?
    • Similarly for "We quickly scale up semiconductor manufacturing and electrical generation", it seems like you don't fully condition on a world where we have TAI that is cheap, that can automate ~100% of human tasks, and that can operate cheap, high-quality robots, and that can probably be deployed to some fairly wide extent even if not (yet) to actually automate ~all human labour.
    • Like, your X100 is 100x as cost-effective as the H100, but that doesn't seem that far off what you'd get from by ju
2
Ted Sanders
Thanks! Totally reasonable to disagree with us on some of these forecasts - they're rough educated guesses, after all. We welcome others to contribute their own forecasts. I'm curious: What do you think are the rough odds that invasion of Taiwan increases the likelihood of TAGI by 2043? Agree wholeheartedly. In a world with scaled, cheap TAGI, things are going to look wildly different and it will be hard to predict what happens. Change could be a lot faster than what we're used to, and historical precedent and intuition might be relatively poor guides relative to first principles thinking. However, we feel somewhat more comfortable with our predictions prior to scaled, cheap AGI. Like, if it takes 3e30 - 3e35 operations to train an early AGI, then I don't think we can condition on that AGI accelerating us towards construction of the resources needed to generate 3e30 - 3e35 operations. It would be putting the cart before the horse. What we can (and try to) condition on are potential predecessors to that AGI; e.g., improved narrow AI or expensive human-level AGI. Both of those we have experience with today, which gives us more confidence that we won't get an insane productivity explosion in the physical construction of fabs and power plants. We could be wrong, of course, and we'll find out in 2043.
2
Erich_Grunewald 🔸
Maybe 20% that it increases the likelihood? Higher if war starts by 2030 or so, and near 0% if it starts in 2041 (but maybe >0% if it starts in 2042?). What number would you put on it, and how would you update your model if that number changed? I think what you're saying here is, "yes, we condition on such a world, but even in such a world these things won't be true for all of 2023-2043, but mainly only towards the latter years in that range". Is that right? I agree to some extent, but as you wrote, "transformative AGI is a much higher bar than merely massive progress in AI": I think in a lot of those previous years we'll still have AI doing lots of work to speed up R&D and carry out lots of other economically useful tasks. Like, we know in this world that we're headed for AGI in 2043 or even earlier, so we should be seeing really capable and useful AI systems already in 2030 and 2035 and so on. Maybe you think the progression from today's systems to potentially-transformative AGI will be discontinuous or something like that, with lots of progress (on algorithms, hardware, robotics, etc.) happening near the end?
3
Ted Sanders
No, I actually fully agree with you. I don't think progress will be discontinuous, and I do think we will see increasingly capable and useful systems by 2030 and 2035 that accelerate rates of progress. I think where we may differ is that:
  • I think the acceleration will likely be more "in line" than "out of line" with the exponential acceleration we already see from improving computer tools and specifically LLM computer tools (e.g., GitHub Copilot, GPT-4). Already a software engineer today is many multiples more productive (by some metrics) than a software engineer in the 90s.
  • I think that tools that, say, cheaply automate half of work, or expensively automate 100% of work, probably won't lead to wild, extra orders of magnitude levels of progress. OpenAI has what, 400 employees?
  • Scenario one: If half their work was automated, ok now those 400 people could do the work of 800 people. That's great, but honestly I don't think it's path-breaking. And sure, that's only the first order effect. If half the work was automated, we'd of course elastically start spending way more on the cheap automated half. But on the other hand, there would be diminishing returns, and for every step that becomes free, we just hit bottlenecks in the hard to automate parts. Even in the limit of cheap AGI, those AGIs may be limited by the GPUs they have to experiment on. Labor becoming free just means capital is the constraint.
  • Scenario two: Or, suppose we have human-cost human-level AGIs. I'm not convinced that would, to first order, change much either. There are millions of smart people on earth who aren't working on AI research now. We could hire them, but we don't. We're not limited by brains. We're limited by willingness to spend. So even if we invent human-cost human-level brains, it actually doesn't change much, because that wasn't the constraint. (Of course, this is massively oversimplifying, and obviously human-cost human-level AGIs would be a bigger deal than human wor
2
Erich_Grunewald 🔸
Do you have any material on this? It sounds plausible to me but I couldn't find anything with a quick search. Supposing you take "progress" to mean something like GDP per capita or AI capabilities as measured on various benchmarks, I agree that it probably won't (though I wouldn't completely rule it out). But also, I don't think progress would need to jump by OOMs for the chances of a financial crisis large enough to derail transformative AGI to be drastically reduced. (To be clear, I don't think drastic self-improvement is necessary for this, and I expect to see something more like increasingly sophisticated versions of "we use AI to automate AI research/engineering".) I also think it's pretty likely that, if there is a financial crisis in these worlds, AI progress isn't noticeably impacted. If you look at papers published in various fields, patent applications, adoption of various IT technologies, numbers of researchers per capita -- none of these things seem to slow down in the wake of financial crises. Same thing for AI: I don't see any derailment from financial crises when looking at model sizes (both in terms of parameters and training compute), dataset sizes or chess program Elo. Maybe capital expenditure will decrease, and that might only start being really important once SOTA models are extremely expensive, but on the other hand: if there's anything in these worlds you want to keep investing in it's probably the technology that's headed towards full-blown AGI? Maybe I think 1 in 10 financial crises would substantially derail transformative AGI in these worlds, but it seems you think it's more like 1 in 2. Yeah, but why only focus on OAI? In this world we have AIs that cheaply automate half of work. That seems like it would have immense economic value and promise, enough to inspire massive new investments in AI companies. Ah, I think we have a crux here. I think that, if you could hire -- for the same price as a human -- a human-level AGI, that would in
1
Ted Sanders
Nope, it's just an unsubstantiated guess based on seeing what small teams can build today vs 30 years ago. Also based on the massive improvement in open-source libraries and tooling compared to then. Today's developers can work faster at higher levels of abstraction compared to folks back then.

Absolutely agree. AI and AGI will likely provide immense economic value even before the threshold of transformative AGI is crossed. Still, supposing that AI research today:
  • is a 50/50 mix of capital and labor
  • faces diminishing returns
  • and has elastic demand
...then even a 4x labor productivity boost may not be all that path-breaking when you zoom out enough. Things will speed up, surely, but they might not create transformative AGI overnight. Even AGI researchers will need time and compute to do their experiments.

Not reading the paper, and not planning to engage in much discussion, and stating beliefs without justification, but briefly commenting since you asked readers to explain disagreement:

I think this framework is bad and the probabilities are far too low, e.g.:

  • We probably already have "algorithms for transformative AGI."
  • The straightforward meaning of "a way for AGIs to learn faster than humans" doesn't seem to be relevant (seems to be already achieved, seems to be unnecessary, seems to be missing the point); e.g. language models are trained faster than humans learn language (+ world-modeling), and AlphaGo Zero went from nothing to superhuman in three days. Maybe you explain this in the paper though.
  • GPT-4 inference is much cheaper than paying humans $25/hr to write similar content.
  • We probably already have enough chips for AGI by 2043 without further scaling up production.

Separately, note that "AI that can quickly and affordably be trained to perform nearly all economically and strategically valuable tasks at roughly human cost or less" is a much higher bar than the-thing-we-should-be-paying-attention-to (which is more like takeover ability; see e.g. Kokotajlo).

9
Ted Sanders
Setting aside assessments of the probabilities (which are addressed in the paper), what do you think is bad about the framework? How would you suggest we improve it?

I mean, I don't think all of your conditions are necessary (e.g. "We invent a way for AGIs to learn faster than humans" and "We massively scale production of chips and power") and I think together they carve reality quite far from the joints, such that breaking the AGI question into these subquestions doesn't help you think more clearly [edit: e.g. because compute and algorithms largely trade off, so concepts like 'sufficient compute for AGI' or 'sufficient algorithms for AGI' aren't useful].

Thank you for the clarification. To me, it is not 100.0% guaranteed that AGIs will be able to rapidly parallelize all learning and it is not 100.0% guaranteed that we'll have enough chips by 2043. Therefore, I think it helps to assign probabilities to them. If you are 100.0% confident in their likelihood of occurrence, then you can of course remove those factors. We personally find it difficult to be so confident about the future.

I agree that the success of AlphaZero and GPT-4 are promising notes, but I don't think they imply a 100.0% likelihood that AGI, whatever it looks like, will learn just as fast on every task.

With AlphaZero in particular, fast reinforcement training is possible because (a) the game state can be efficiently modeled by a computer and (b) the reward can be efficiently computed by a computer.

In contrast, look at a task like self-driving. Despite massive investment, our self-driving AIs are learning more slowly than human teenagers. Part of the reason for this is that conditions (a) and (b) no longer hold. First, our simulations of reality are imperfect, and therefore fleets must be deployed to drive millions of miles. Second, calculating reward functions (i.e., ... (read more)

You start off saying that existing algorithms are not good enough to yield AGI (and you point to the hardness of self-driving cars as evidence) and fairly likely won't be good enough for 20 years. And also you claim that existing levels of compute would be way too low to learn to drive even if we had human-level algorithms. Doesn't each of those factors on its own explain the difficulty of self-driving? How are you also using the difficulty of self-driving to independently argue for a third conjunctive source of difficulty?

Maybe another related question: can you make a forecast about human-level self-driving (e.g. similar accident rates vs speed tradeoffs to a tourist driving in a random US city) and explain its correlation with your forecast about human-level AI overall? If you think full self-driving is reasonably likely in the next 10 years, that superficially appears to undermine the way you are using it as evidence for very unlikely AGI in 20 years. Conversely, if you think self-driving is very unlikely in the next 10 years, then it would be easier for people to update their overall views about your forecasts after observing (or failing to observe) full self-driving.

I think ... (read more)

Maybe another related question: can you make a forecast about human-level self-driving (e.g. similar accident rates vs speed tradeoffs to a tourist driving in a random US city) and explain its correlation with your forecast about human-level AI overall?

Here are my forecasts of self-driving from 2018: https://www.tedsanders.com/on-self-driving-cars/

Five years later, I'm pretty happy with how my forecasts are looking. I predicted:

  • 100% that self-driving is solvable (looks correct)
  • 90% that self-driving cars will not be available for sale by 2025 (looks correct)
  • 90% that self-driving cars will debut as taxis years before sale to individuals (looks correct)
  • Rollout will be slow and done city-by-city, starting in the US (looks correct)

Today I regularly take Cruises around SF and it seems decently likely that self-driving taxis are on track to be widely deployed across the USA by 2030. Feels pretty probable, but still plenty of ways that it could be delayed or heterogenous (e.g., regulation, stalling progress, unit economics).

Plus, even wide robotaxi deployment doesn't mean human taxi drivers are rendered obsolete. Seems very plausible we operate for many many years with a mixed fleet, where... (read more)

3
Ted Sanders
This is not a claim we've made.
3
Paul_Christiano
That's fair, this was some inference that is probably not justified. To spell it out: you think brains are as effective as 1e20-1e21 flops. I claimed that humans use more than 1% of their brain when driving (e.g. our visual system is large and this seems like a typical task that engages the whole utility of the visual system during the high-stakes situations that dominate performance), but you didn't say this. I concluded (but you certainly didn't say) that a human-level algorithm for driving would not have much chance of succeeding using 1e14 flops.
7
Ted Sanders
I think you make a good argument and I'm open to changing my mind. I'm certainly no expert on visual processing in the human brain. Let me flesh out some of my thoughts here. On whether this framework would have yielded bad forecasts for self-driving: When we guess that brains use 1e20-1e21 FLOPS, and therefore that early AGIs might need 1e16-1e25, we're not making a claim about AGIs in general, or the most efficient AGI possible, but AGIs by 2043. We expect early AGIs to be horribly inefficient by later standards, and AGIs to get rapidly more efficient over time. AGI in 2035 will be less efficient than AGI in 2042 which will be less efficient than AGI in 2080. With that clarification, let's try to apply our logic to self-driving to see whether it bears weight. Supposing that self-driving needs 1% of human brainpower, or 1e18-1e19 FLOPS, and then similarly widen our uncertainty to 1e14-1e23 FLOPS, it might say yes, we'd be surprised but not stunned at 1e14 FLOPS being enough to drive (10% -> 100%). But, and I know my reasoning is motivated here, that actually seems kind of reasonable? Like, for the first decade and change of trying, 1e14 FLOPS actually was not enough to drive. Even now, it's beginning to be enough to drive, but still is wildly less sample efficient than human drivers and wildly worse at generalizing than human drivers. So it feels like if in 2010 we predicted self-driving would take 1e14-1e23 FLOPS, and then a time traveler from the future told us that actually it was 1e14 FLOPS, but it would take 13 years to get there, and actually would still be subhuman, then honestly that doesn't feel too shocking. It was the low end of the range, took many years, and still didn't quite match human performance. No doubt with more time and more training 1e14 FLOPS will become more and more capable. Just as we have little doubt that with more time AGIs will require fewer and fewer FLOPS to achieve human performance. So as I reflect on this framework applied

I like that you can interact with this. It makes understanding models so much easier.

Playing with the calculator, I see that the result is driven to a surprising degree by the likelihood that "Compute needed by AGI, relative to a human brain (1e20-1e21 FLOPS)" is <1/1,000x (i.e. the bottom two options).[1]

I think this shows that your conclusion is driven substantially by your choice to hardcode "1e20-1e21 FLOPS" specifically, and then to treat this figure as a reasonable proxy for the computation an AGI would need. (That is, you suggest ~1x as the midpoint for "Compute needed by AGI, relative to... 1e20-1e21 FLOPS".)

I think it's also a bit of an issue to call the variable "relative to a human brain (1e20-1e21 FLOPS)". Most users will read it as "relative to a human brain" while it's really "relative to 1e20-1e21 FLOPS", which is quite a specific take on what a human brain is achieving.

I value the fact that you argue for choosing this figure here. However, it seems like you're hardcoding in confidence that isn't warranted. Even from your own perspective, I'd guess that including your uncertainty over this figure would bump up the probability by a factor of 2-3, while it looks l... (read more)
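As a rough illustration of that last point, here is a toy Monte Carlo sketch (all distributions and the cost threshold below are hypothetical, chosen only to show the mechanism, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical toy model: compute needed by AGI = (brain-equivalent FLOPS) x
# (algorithmic efficiency factor), working in log10 throughout.
hard_log_brain = np.full(n, 20.5)               # hardcoded anchor: exactly 10^20.5 FLOPS
soft_log_brain = rng.normal(20.5, 1.5, size=n)  # same central guess, but uncertain

log_efficiency = rng.normal(0.0, 2.0, size=n)   # how much more/less compute AGI needs
threshold = 16                                  # pretend 1e16 FLOPS is affordable at $25/hr

p_hard = np.mean(hard_log_brain + log_efficiency <= threshold)
p_soft = np.mean(soft_log_brain + log_efficiency <= threshold)
print(f"P(cheap enough), hardcoded anchor: {p_hard:.1%}")   # ~1.2%
print(f"P(cheap enough), uncertain anchor: {p_soft:.1%}")   # ~3.6%
# Spreading out the anchor pushes more mass into the low-compute tail, which is the
# mechanism behind the commenter's "factor of 2-3" guess.
```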

My quick rebuttal is the flaw you also seem to acknowledge: these different factors you calculate are not separate variables. They all likely influence each other's probabilities (greater capabilities can drive greater scaling of manufacturing, since people will want more of it; greater intelligence can find better forms of efficiency, which means cheaper to run; etc.). This is how you can use probabilities to make almost anything look extremely improbable, as you noted.

2
Ted Sanders
Yep, that's admittedly a risk of a framework like this. We've tried our best not to make that mistake, and have gone to some length explaining why we think we haven't. If you disagree, please help us by telling us which disjunctive paths you think we've missed or which probabilities you think we've underestimated. As we asked in the post:
2
Prometheus
The primary issue I guess is that the normal rules don't easily apply here. We don't have good past data to make predictions, so every new requirement added introduces more complexity (and chaos), which might make the model less accurate than using fewer variables. Thinking in terms of "all other factors remaining, what are the odds of x" sounds less accurate, but might be the only way to avoid being consumed by all potential variables. Like, ones you don't even mention that I could name include "US democracy breaks down," "AIs hack the grid," "AIs break the internet/infect every interconnected device with malware," etc. You could just keep adding more requirements until your probabilities drop to near 0, because it'll be difficult to say with much confidence that any of them are <.01 likely to occur, even though a lot of them probably are. It's probably better to group several constraints together and give a probability that one or more of them occurs (example: "chance that recession/war/regulation/other slows or halts progress"), rather than trying to assess the likelihood of each one. Ordinarily, this wouldn't be a problem, but we don't have any data we could normally work with.

Here's a brief writeup of some agreements/disagreements I have with the individual constraints.

"We invent algorithms for transformative AGI": I don't know how this is only 60%. I'd place it at >50% before 2030, let alone 2043. This is just guesswork, but we seem to be one or two breakthroughs away.

"We invent a way for AGIs to learn faster than humans" (40%): I don't really know what this means, why it's required, or why it's so low. I see that the paper mentions humans being sequential learners whose learning takes years, but AIs don't seem to work that way. Imagine if GPT-4 took years just to learn basic words. AIs also seem to already be able to learn faster than humans. They currently need more data, but less compute, than a human brain. Computers can already process information much f

I'd have to think more carefully about the probabilities you came up with and the model for the headline number, but everything else you discuss is pretty consistent with my view. (I also did a PhD in post-silicon computing technology, but unlike Ted I went right into industry R&D afterwards, so I imagine I have a less synoptic view of things like supply chains. I'm a bit more optimistic, apparently—you assign <1% probability to novel computing technologies running global-scale AI by 2043, but I put down a full percent!)

The table "Examples transisto... (read more)

3
Muireall
(Here's my submission—I make some similar points but don't do as much to back them up. The direction is more like "someone should try taking this sort of thing into account"—so I'm glad you did!)

I appreciate the "We avoid derailment by…" sections – I think some forecasts have implicitly overly relied on a "business as usual" frame, and it's worth thinking about derailment.

In short, we expect the typical outcome of an invasion is that TSMC’s output going to AGI will be drastically reduced for many years. This will slow transformative AGI timelines by years, as TSMC is the #1 producer of advanced semiconductor chips and makes 100% of advanced AI chips

TSMC is obviously a market leader, but it seems weird to assume that TAI is infeasible without them?... (read more)

6
Ted Sanders
Thanks! We agree that a common mistake by forecasters is to equate low probability of derailment with negligible probability of derailment. The future is hard to predict, and we think it's worth taking tail risks seriously.

We do not assume TAI is infeasible without TSMC. That would be a terrible reasoning error, and I apologize for giving you that impression. What we assume is that losing TSMC would likely delay TAI by a handful of years, as it would take:

  • Time for NVIDIA to bid on capacity from Samsung
  • Time for Samsung to figure out to what extent it could get out of prior contracted commitments
  • Time for NVIDIA and Samsung engineers to retune GPU designs for Samsung's fab design rules
  • Time to manufacture the masks and put the GPUs into production
  • Time to iron out early manufacturing and yield issues
  • Time to build new fabs to absorb the tsunami of demand from TSMC customers (like Apple) and scale up to NVIDIA's original TSMC volumes

And on top of this, there would be massive geopolitical uncertainty that would slow things like investment in new fabs, as companies wonder whether the conflict will escalate or evaporate (both of which massively change the investment case).

What this might look like in reality will also depend on how close we are to transformative AGI. Two example scenarios: Today, for example, NVIDIA is probably not going to outbid Apple (Apple makes ~$10B in PROFIT per month, which would evaporate if it were starved of chips). Or, imagine it's 2035, NVIDIA is worth $10T, and the semiconductor industry has been building fabs left and right to fuel the impending AGI boom. In such a world, where NVIDIA is the world's biggest chip designer, it may already dominate manufacturing at both Samsung and TSMC, meaning that if TSMC goes down, it cannot shift production to Samsung, because it already has production at Samsung.

In any case, we fully agree TAI is feasible without TSMC. But we think losing TSMC delays things by a few
3
Ben_West🔸
Cool, I agree that if most of your probability mass is in the final few years before 2043, then a couple-year delay is likely to push you past the 2043 deadline.

One thing I find deeply unconvincing about such a low probability (<1%), and that does not require expert knowledge, is that other ways of slicing this would yield much higher estimates.

E.g., it seems difficult to justify less than a 10% probability that there will be really strong pressures to develop AGI, and it seems similarly difficult to justify less than a 10% success probability given such an effort and what we now know.

7
Ted Sanders
I agree there will be really strong pressures to develop AGI. Already, many research groups are investing billions today (e.g., Google DeepMind, OpenAI, Anthropic). I'd assign 100% probability to this rather than <10%. I guess it depends on how many billions of dollars of investment qualify as "strong pressures." Well, our essay is an attempt to forecast the likelihood of success, given what we know. If you disagree with our estimates, would you care to supply your own? What conditional probabilities do you believe would result in a 10%+ chance of TAGI by 2043? As I asked in the post:

Thanks for posting this, Ted; it's definitely made me think more about the potential barriers and the proper way to combine probability estimates.

One thing I was hoping you could clarify: In some of your comments and estimates, it seems like you are suggesting that it’s decently plausible(?)[1] we will “have AGI“ by 2043, it’s just that it won’t lead to transformative AGI before 2043 because the progress in robotics, semiconductors, and energy scaling will be too slow by 2043. However, it seems to me that once we have (expensive/physically-limited) AG... (read more)

5
Ted Sanders
Agree that:

  • The odds of AGI by 2043 are much, much higher than the odds of transformative AGI by 2043
  • AGI will rapidly accelerate progress toward transformative AGI
  • The odds of transformative AGI by 2053 are higher than by 2043

We didn't explicitly forecast 2053 in the paper, just 2043 (0.4%) and 2100 (41%). If I had to guess without much thought, I might go with 3%. It's a huge advantage to get 10 extra years to build fabs, make algorithms efficient, collect vast training sets, train from slow/expensive real-world feedback, and recover from rare setbacks.

My mental model is some kind of S curve where progress in the short term is extremely unlikely, progress in the medium term is more likely, and after a while, the longer it takes to happen, the less likely it is to happen in any given year, as that suggests some ingredient is still missing and hard to get. I think you may be right that twenty years is before the S of my S curve really kicks in. Twenty just feels so short given everything that needs to be solved and scaled. I'm much more open-minded about forty.
2
Marcel D
Interesting. Perhaps we have quite different interpretations of what AGI would be able to do with some set of compute/cost and time limitations. I haven't had the chance yet to read the relevant aspects of your paper (I will try to do so over the weekend), but I suspect that we have very cruxy disagreements about the ability of a high-cost AGI—and perhaps even pre-general AI that can still aid R&D—to help overcome barriers in robotics, semiconductor design, and possibly even aspects of AI algorithm design. Just to clarify, does your S-curve almost entirely rely on base rates of previous trends in technological development, or do you have a component in your model that says "there's some X% chance that conditional on the aforementioned progress (60% * 40%) we get intermediate/general AI that causes the chance of sufficiently rapid progress in everything else to be Y%, because AI could actually assist in the R&D and thus could have far greater returns to progress than most other technologies"?
4
Ted Sanders
No, it's not just extrapolating base rates (that would be a big blunder). We assume that the development of proto-AGI or AGI will rapidly accelerate progress and investment, and our conditional forecasts are much more optimistic about progress than they would be otherwise. However, it's totally fair to disagree with us on the degree of that acceleration. Even with superhuman AGI, for example, I don't think we're moving away from semiconductor transistors in less than 15 years. Of course, it really depends on how superhuman this superhuman intelligence would be. We discuss this more in the essay.

Your probabilities are not independent; your estimates mostly flow from a world model which seems to me to be flatly and clearly wrong.

The plainest examples seem to be assigning

We invent a way for AGIs to learn faster than humans: 40%
AGI inference costs drop below $25/hr (per human equivalent): 16%

despite current models learning vastly faster than humans (the training time of LLMs is not a human lifetime, and covers vastly more data), current models nearing AGI, and inference already being dramatically cheaper and still plummeting with algorithmic improvements. There is a general... (read more)

1
Ted Sanders
Some models learning some things faster than humans does not imply AGI will learn all things faster than humans. Self-driving cars, for example, are taking much longer to learn to drive than teenagers do.
2
plex
Disagree with the example. Human teenagers spend quite a few years learning object recognition and other skills necessary for driving before they ever drive, and I'd bet at good odds that an end-to-end training run of a self-driving car network is shorter than even the driving lessons a teenager takes to become proficient at a level similar to the car's. Designing the training framework, no, but the comparator there is evolution's millions of years, so that doesn't buy you much.
1
Ted Sanders
The end-to-end training run is not what makes learning slow. It's the iterative reinforcement learning process of deploying in an environment, gathering data, training on that data, and then redeploying with a new data collection strategy, and so on. It's a mistake, I think, to focus only on the narrow task of updating model weights and omit the critical task of iterative data collection (i.e., reinforcement learning).
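A schematic sketch of the loop described above (stub functions standing in for real training and deployment; this illustrates the argument, it is not any real system's code):

```python
# The claim: calendar time is dominated by the repeated deploy/collect/retrain cycle,
# not by the cost of any single end-to-end training run.

def train(model, dataset):
    # Stand-in for an end-to-end training run; fast relative to the outer loop.
    return {"version": model["version"] + 1, "episodes_seen": len(dataset)}

def deploy_and_collect(model, months):
    # Stand-in for real-world deployment: slow, expensive, and safety-limited.
    return [f"episode gathered under model v{model['version']}" for _ in range(10)]

model, dataset = {"version": 0, "episodes_seen": 0}, []
for generation in range(5):                          # each generation costs months of wall-clock time
    model = train(model, dataset)
    dataset += deploy_and_collect(model, months=6)   # the next data-collection strategy
                                                     # depends on the newly deployed model
print(f"{len(dataset)} episodes gathered across 5 slow generations")
```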

For AGI to do most human work for <$25/hr by 2043, many things must happen.

I don't think this is necessarily the right metric, for the same reason that I think the following statement doesn't hold:

transformative AGI is a much higher bar than... even the unambiguous attainment of expensive superhuman AGI

Basically, while the contest rules do say, "By 'AGI' we mean something like 'AI that can quickly and affordably be trained to perform nearly all economically and strategically valuable tasks at roughly human cost or less'", they then go on to clarify, "Wha... (read more)

It feels like you're double counting a lot of the categories of derailment at first glance. Each of the derailments has a highly conjunctive story behind it, which makes me suspicious of multiplying them together as if they're independent. I'm also confused as to how you're calculating the disjunctive probabilities, because on page 78 you put "Conditional on being on a trajectory to transformative AGI, we forecast a 40% chance of severe war erupting by 2042". However, this doesn't seem to be an argument for derailment; it seems more likely it'd be an argument for race dynamics increasing?

4
Ted Sanders
To me, the odds of pandemics, wars, and regulation feel decently independent, but perhaps I haven't thought deeply enough about pandemics causing depressions causing wars, or wars leading to engineered pandemics being released, etc. Looking at the past 100ish years of history, the worst wars (World War I & II), the worst pandemics (various flus, COVID-19), and the worst recessions (the Great Depression, the Great Recession) all seem fairly independent. In any case, we tried our best to come up with probabilities of each, conditional on the others not occurring. What probabilities would you assign? As we asked in our post:
5
zchuang
Yeah, I think I would just bin all of the delay scenarios into one bucket so that they are not treated as independent. For instance, WWI, the Great Depression, and WWII seem quite causally contingent upon one another. I'll chew on how the binning works, but nonetheless I really appreciate this piece of work; it's really easy to read and understand, as well as internally well reasoned. Didn't mean to come off too harsh.
5
Ted Sanders
Not harsh at all; I genuinely appreciate the discussion. If there are good criticisms of our approach, I hope that we absorb them into an improved model rather than entrenching ourselves against them.

The issue I see with grouping these factors is then how do we figure out what forecast to make for the collective group? The intuitive approach I'd take is to look at the rates of pandemics, world wars, etc. So it feels like we'd still be basing the estimate on mostly independent considerations even if we smush the final product together at the end.

It seems like a tricky forecasting problem in general. You don't want a model with too many finicky specific scenarios, but you also don't want amorphous uninterpretable blobs that arise from irreversibly blending many ingredients together. A model with 1,000 parameters isn't going to convince anyone, and neither will a model with just 1. We tried to keep to a manageable range of 10 overall factors, backed by a few dozen subfactors. But there's definitely room to move in either direction.
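To make the binning question concrete, here is a toy sketch comparing the two treatments (the marginal probabilities and the shared-shock structure are purely illustrative, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

# Illustrative marginal chances of each derailment over the period (not the paper's numbers).
p_war, p_pandemic, p_depression = 0.30, 0.10, 0.05

# Treatment 1: the derailments are independent, as a factor-by-factor product assumes.
p_avoid_independent = (1 - p_war) * (1 - p_pandemic) * (1 - p_depression)

# Treatment 2: one way to "bin" them -- a shared global-instability shock that makes
# the derailments co-occur (war -> depression, etc.) while keeping the same marginals.
instability = rng.standard_normal(n)

def correlated_event(p, shared, weight=0.7):
    latent = weight * shared + np.sqrt(1 - weight**2) * rng.standard_normal(n)
    return latent > np.quantile(latent, 1 - p)

war = correlated_event(p_war, instability)
pandemic = correlated_event(p_pandemic, instability)
depression = correlated_event(p_depression, instability)
p_avoid_correlated = np.mean(~(war | pandemic | depression))

print(f"P(avoid all derailments), independent: {p_avoid_independent:.1%}")  # ~59.9%
print(f"P(avoid all derailments), correlated:  {p_avoid_correlated:.1%}")   # noticeably higher
# When the bad scenarios cluster, avoiding all of them is more likely than the
# independent product suggests, which is the crux of whether and how to bin.
```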

Cool! I mostly like your decomposition/framing. A major nitpick is that robotics doesn't matter so much: dispatching to human actuators is probably cheap and easy, like listing mturk jobs or persuasion/manipulation. 

Agreed. AGI can have great influence in the world just by dispatching humans.

But by the definition of transformative AGI that we use - i.e., that AGI is able to do nearly all human jobs - I don't think it's fair to equate "doing a job" with "hiring someone else to do the job." To me, it would be a little silly to say "all human work has been automated" and only mean "the CEO is an AGI, but yeah, everyone still has to go to work."

Of course, if you don't think robotics is necessary for transformative AGI, then you are welcome to remove the factor (or equivalently set it to 100%). In that case, our prediction would still be <1%.

While I agree with many of the object-level criticisms of various priors that seem to be out of touch with the current state of ML, I would like to instead make precise a certain obvious flaw in the methodology of the paper, which was pointed out several times and which you seem to be unjustifiably dismissive of.

tl;dr: when doing Bayesian inference, it is crucial to be cognizant that, regardless of how certain your priors are, the more conditional steps involved in your model, the less credence you should give to the overall prediction.

As for the case at hand, it is ... (read more)

Model error higher than 1%?

5
Ted Sanders
Three questions for you that would help us improve our model:

  • What important error do you think is made by our model?
  • What modification would you propose to address the error?
  • What impact do you think your modification would have on the resultant forecast?
2
Prometheus
I think he's asking if your margin of error is >.01
4
Ted Sanders
What is a margin of error, here, exactly? The event will either happen (1) or not (0). The 0.4% already reflects our uncertainty. In general, I don't think it makes mathematical sense to discuss probabilities of probabilities.*

*Although of course it can make sense to describe sensitivities of probabilities to new information coming in.
2
NickLaing
It would be far, far higher of course! With that many variables? Think about the uncertainty we ascribe to cost-effectiveness analyses with far fewer variables and far better evidence. Even calculating the error here would be close to impossible. 95% confidence interval of 0.1% to 50%? (Kind of joking here, but it might be in that range.)
8
Ted Sanders
Confidence intervals over probabilities don't make much sense to me. The probability itself is already the confidence interval over the binary domain [event happens, event doesn't happen]. I guess to me the idea of confidence intervals over probabilities implies two different kinds of probabilities, e.g., a reducible flavor and an irreducible flavor. I don't see what a two-tiered system of probability adds, exactly.
3
Davidmanheim
This was an extensive debate in the 1980s and 90s between Judea Pearl, proponents of Dempster-Shafer theory, and a few others. I think it's trivially true, however, that in the probability-centric view you espouse, it can be helpful to track second-order uncertainty, and that reducible versus irreducible uncertainty is critical for VoI analysis.
3
Ted Sanders
What is VoI analysis?
3
Davidmanheim
Value of Information. Here's my brief intro post about it: https://forum.effectivealtruism.org/posts/8w2hNT5WtDMzoaGuy/when-to-find-more-information-a-short-explanation. And for more on the debates about second-order probabilities and confidence intervals, and why Pearl says you don't need them (you should just use a Bayesian network), see his paper here: https://core.ac.uk/download/pdf/82281071.pdf
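To make the reducible/irreducible distinction concrete, here is a toy sketch (the numbers and the two-hypothesis prior are an illustrative construction, not Pearl's or the paper's) of two forecasters who report the same headline probability but should react very differently to new evidence:

```python
# Forecaster A is essentially certain the right number is 0.004 (0.4%).
# Forecaster B thinks it's either 0.04 or 0.0004 and doesn't yet know which
# (say, pending some key result), with weights chosen so the mean is also ~0.4%.
prior_B = {0.04: 0.09, 0.0004: 0.91}
mean_B = sum(p * w for p, w in prior_B.items())
print(f"Headline forecasts: A = 0.40%, B = {mean_B:.2%}")        # both report ~0.4%

# Evidence arrives that is 10x likelier under B's "high" hypothesis.
likelihood_ratio = 10.0
w_high = prior_B[0.04] * likelihood_ratio / (prior_B[0.04] * likelihood_ratio + prior_B[0.0004])
post_B = w_high * 0.04 + (1 - w_high) * 0.0004
print(f"After the evidence: A barely moves, while B jumps to {post_B:.2%}")  # ~2.0%

# For a single one-shot bet, only the headline number matters, but the second-order
# structure determines how much new information should move the forecast -- which is
# why reducible uncertainty is what value-of-information analysis cares about.
```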