This is one of two posts I’m putting up today on how little economic theory alone has to say about the effects full automation would have on familiar economic variables.
The other is “The ambiguous effect of full automation on wages”.
For a more interesting but more mathematically complex illustration of the dynamic explored in this post, see Hyperbolic goods, exponential GDP.
Introduction
A lot of us are wondering what impact AI will have on global GDP growth (or would have if fully aligned and minimally regulated, in a world not destroyed by conflict). People have occasionally asked for my opinion ever since I first wrote a literature review on the question five years ago—in fact since before that, which is one of the reasons I wrote the review in the first place! My answer has changed over time in many ways, but the outline remains similar.
- It seems much more likely than not to me that advanced enough AI would eventually result in GWP growth above the fastest rates of sustained “catch-up growth” we have ever seen, namely the long stretches of ~10% growth seen over the last century in several East Asian countries, in which growth was essentially bottlenecked by capital accumulation rather than technological progress.
- I think that radically faster growth rates are also plausible. Most growth models predict that if capital could substitute well enough for labor, the growth rate would rise ~indefinitely, and none of the arguments (that I’ve come across to date) for ruling this out seem strong. But I also don’t think it makes sense to be confident that this will happen (given the work on this that I’ve come across to date), since there are some reasons why the extrapolation of ever faster GDP growth might break down.
The example below is an attempt to succinctly communicate one of those reasons: namely that GDP is, on inspection, a bizarrely constructed object with no necessary connection to any intuitive notion of technological capacity. Many people are already aware of this on some level, but the disconnect seems much bigger to me than typically appreciated. I intuitively don’t think this is a very strong reason to expect slow GDP growth given full automation, to be clear, but I do think it’s real and underappreciated.[1]
Example
Assume for simplicity that everything produced is consumed immediately.
We produce different kinds of goods. A good’s share at a given time is the fraction of all spending that is spent on that good. The growth rate of GDP at a given time is the weighted average of the growth rates of the good-quantities at that time, with each good’s weight given by its share. (We are talking about real GDP growth, using chain-weighting. To understand why it is standard to define GDP growth this way, Whelan (2001) offers a good summary.)
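To make the definition concrete, here is a minimal sketch in Python of the computation it describes (the function and the toy numbers are mine, not from Whelan (2001)); it uses lagged spending shares as a discrete-time stand-in for the continuous-time shares.

```python
import numpy as np

def chained_gdp_growth(quantities, prices):
    """Approximate chain-weighted real GDP growth rates.

    quantities, prices: arrays of shape (T, n_goods), quantities > 0.
    The growth rate between t and t+1 is the average of the good-level
    quantity growth rates, weighted by each good's share of total
    spending at t (a discrete-time approximation to the continuous-time
    definition in the text).
    """
    quantities = np.asarray(quantities, dtype=float)
    prices = np.asarray(prices, dtype=float)
    spending = quantities * prices
    shares = spending / spending.sum(axis=1, keepdims=True)
    # Log growth of each good's quantity between consecutive periods.
    qty_growth = np.diff(np.log(quantities), axis=0)
    # Weight by lagged spending shares and sum across goods.
    return (shares[:-1] * qty_growth).sum(axis=1)

# Toy data: good 1's quantity doubles, good 2's grows 10%;
# good 1 has a 1/67 spending share, as in the example below.
q = [[1.0, 66.0], [2.0, 72.6]]
p = [[1.0, 1.0], [0.5, 1.0]]
print(chained_gdp_growth(q, p))  # = (1/67)*log(2) + (66/67)*log(1.1)
```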
Here is a simple illustration of how, even if our productive capacity greatly accelerates on a common-sense definition, GDP growth can slow.
Suppose there are two goods, the population is fixed and its size is normalized to 1, and everyone has the utility function u(x1, x2) = log(x1) + x2, where x1 and x2 are the quantities consumed of goods 1 and 2 respectively.
Observe that the marginal utility of good 2 always equals 1, and that the marginal utility of good 1 diminishes, equaling 1 when x1 = 1. Early in time, only good 1 has been invented, and we produce 2% more of good 1 each year. This is the rate of GDP growth.
Step 1
In the year when x1 = 1, a technological advance is made. Call this year t = 0. This advance allows us to fully automate the production of every good we would ever have produced without the advance--thus greatly increasing the rate at which we can increase production of good 1--and beyond that, it yields the invention (or allows for the production) of good 2.
Suppose that at first we are equally productive at making each good, so their prices are equal. Each person’s budget constraint is also the production possibilities frontier: x1 + x2 = 1. So at first demand for good 1 remains equal to 1, and demand for good 2 is 0.
Until t = 14, our productivity in both sectors grows at 30% per year. The prices of the goods always remain equal, but in year t the budget constraint is x1 + x2 = e^{0.3t}.
Observe that demand for good 1 stays fixed at x1 = 1, with all marginal productive capacity being put into making good 2. So from t = 0 to t = 14, we have x2 = e^{0.3t} − 1. So x2(14) = e^{4.2} − 1 ≈ 66.
Step 2
From t = 14 onward, our productivity at making good 1 grows at 100% per year, but our productivity at making good 2 stops growing.
Good 1's share stays constant at 1/(1+66) = 1/67. That is, the quantity of good 1 bought each year grows at 100% per year, without causing us to raise or lower the quantity of good 2 bought each year. This follows from the fact that every time its quantity doubles, its marginal utility halves (since ∂u/∂x1 = 1/x1), so if we were indifferent between increasing spending on good 1 and on good 2 before
i) the quantity of good 1 and
ii) the amount more of good 1 we could produce by foregoing a unit of good 2
both doubled, then we are indifferent afterward as well.
So from now on GDP grows at 100%/67 ≈ 1.5%/yr.
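For readers who want to check the arithmetic, here is a minimal simulation of the whole example (a sketch in Python; the closed-form allocations are derived from the first-order conditions stated above, and the variable names and finite-difference step are mine). It reproduces the three phases: 2% growth before t = 0, 30% from t = 0 to 14, and ~1.5% afterward.

```python
import numpy as np

T1 = 14.0               # year the 30%-growth phase ends
A2 = np.exp(0.30 * T1)  # good-2 productivity, frozen from t = 14 on (~67)

def alloc(t):
    """Optimal (x1, x2, p1/p2) at time t, from the first-order conditions."""
    if t < 0:                              # only good 1 exists; 2%/yr growth
        return np.exp(0.02 * t), 0.0, 1.0
    if t < T1:                             # equal prices; budget x1 + x2 = e^{0.3t}
        return 1.0, np.exp(0.30 * t) - 1.0, 1.0   # FOC: 1/x1 = 1
    a1 = A2 * np.exp(1.00 * (t - T1))      # good-1 productivity grows at 100%/yr
    return a1 / A2, A2 - 1.0, A2 / a1      # FOC: 1/x1 = p1/p2 = a2/a1

def gdp_growth(t, dt=1e-5):
    x1, x2, rel_p = alloc(t)
    x1b, x2b, _ = alloc(t + dt)
    s1 = rel_p * x1 / (rel_p * x1 + x2)    # good 1's spending share
    g1 = (np.log(x1b) - np.log(x1)) / dt   # quantity growth rates
    g2 = (np.log(x2b) - np.log(x2)) / dt if x2 > 0 else 0.0
    return s1 * g1 + (1.0 - s1) * g2       # share-weighted quantity growth

for t in [-5.0, 7.0, 20.0]:
    print(f"t = {t:5.1f}: GDP growth = {gdp_growth(t):.4f}")
# prints ~0.0200, ~0.3000, and ~0.0150 (good 1's share ≈ 1/67)
```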
Discussion
Is this related to the point that GDP growth is often said to be “mismeasured” when new goods are introduced?
No. At least, the point made by the example above is unrelated to the issue people are usually referring to when they talk about GDP growth mismeasurement when new goods are introduced. The issue typically raised is that, if a good is introduced at a price below the highest price at which people would have been willing to start buying it, we do not count the consumer surplus associated with those initial purchases of the good, but implicitly assume that consumers value the new good at precisely its initial price. But the above example is set up so that, when good 2 is introduced, it is just expensive enough that the quantity demanded is zero.
This seems crazy. If good 2 had never been introduced, annual GDP growth would have been 2% until t=0, then 30% until t=14, and then 100% onward, not 1.5%. And the existence of good 2 only makes people better off, in fact much better off. What’s going wrong?
Yes, this example illustrates more generally that changes in consumption that make everyone much better off can slow GDP growth. This is a fact that economists usually half-learn at the beginning of grad school, buried deep in the details of some week on inflation measurement, and then forget about! My own view is that this reveals that assuming that GDP will track anything that matters is indeed pretty crazy, except in domains where this has been verified (or moderate extrapolations from these domains).
If this is true, why don’t economists all treat GDP as a meaningless variable with no connection to anything that matters?
I think there are three main reasons.
- It’s more clearly useful for tracking “what matters” in the context of short-term booms and busts, when the kinds of goods available are roughly fixed and GDP fluctuations are mainly due to fluctuations in employment.
- In conversation it’s clear that very many economists, including many growth theorists, are not aware of the weakness of the theoretical basis for assuming that GDP will track anything meaningful in the longer run.
- In some longer-run contexts GDP has been found historically to correlate with other measures of welfare or productive capacity.
But this third point is just a brute empirical regularity, resting on contingent facts (e.g. the ratio between productivity growth on goods introduced long ago and productivity growth on recently introduced goods) that may not be maintained in a very different future technological era.
If productivity at making good 1 grows superexponentially, GDP still grows superexponentially; the growth rate is just always 1/67 of what it would have been without good 2 “getting in the way”. So if I think growth will be superexponential for a long time, shouldn’t I still think GDP growth will be superexponential?
In this example, where utility is logarithmic in good 1, the presence of good 2 knocks the growth rate down by a constant multiple. But if utility in good 1 asymptotes to an upper bound, e.g. if u(x1, x2) = (1 − 1/x1) + x2, then the GDP growth rate falls to zero in this example even if the growth rate of good 1 is hyperbolic. (Here is an example spelling this logic out.) Indeed, my guess is that people’s utility in the goods available today does have an upper asymptote, that new goods in the future could raise our utility above that bound, and that this cycle has been played out many times already.
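Here is a minimal numeric sketch of that bounded-utility case (Python; the specific hyperbolic path x1(t) = (T − t)^{−2} and the fixed quantity of good 2 are my illustrative assumptions, not taken from the linked example). Good 1's growth rate blows up, yet its spending share collapses even faster, so chain-weighted GDP growth heads to zero.

```python
import numpy as np

T, X2 = 10.0, 66.0   # good 1's blow-up date and good 2's fixed quantity (assumed)

def x1(t):
    """Quantity of good 1, growing hyperbolically (growth rate 2/(T - t))."""
    return 1.0 / (T - t) ** 2

def gdp_growth(t):
    # With u = (1 - 1/x1) + x2, marginal utilities are 1/x1^2 and 1, so the
    # relative price of good 1 is 1/x1^2 and its spending share is
    # (x1 / x1^2) / (x1 / x1^2 + x2) = 1 / (1 + x1 * x2).
    s1 = 1.0 / (1.0 + x1(t) * X2)
    g1 = 2.0 / (T - t)            # d log x1 / dt, computed analytically
    return s1 * g1                # good 2's quantity is flat, so it adds nothing

for t in [9.0, 9.9, 9.99, 9.999]:
    print(f"t = {t:6.3f}: good-1 growth = {2/(T-t):8.1f}/yr, "
          f"GDP growth = {gdp_growth(t):.5f}/yr")
# good-1 growth explodes while GDP growth falls toward zero
```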
Historically, if we look back to the Malthusian past, long-run GDP growth has been superexponential. GDP growth has been only exponential recently, due to the fact that we don’t turn all our productive capacity into having as many children as possible. So shouldn’t we expect that, despite whatever curiosity is going on with the example, GDP will return to being superexponential following full AGI + robotics?
I think it could, but I don’t think this follows from the analogy to growth in a Malthusian era. Back then, in some sense, we were only producing one “good”—say, calories, or the bundle of calories and clothing and so on needed to keep a person alive. Over the years we produced ever more copies of it to spread across an ever larger population, without dramatically shifting our consumption over to a new good which might exhibit slower productivity growth.[2]
Isn’t this just the classic point about “Baumol’s cost disease”?
The points are closely related but distinct. The Baumol point is that among a set of already existing goods which we don’t see as very substitutable, GDP growth can be pulled down arbitrarily by the slow-growing goods. This is sometimes raised as a reason to doubt that even really impressive automation will yield fast GDP growth, as long as it falls even a little bit short of full automation. The point I’m making here is that even if we fully automate production, and even if the quantity of every good existing today then grows arbitrarily quickly, we might create new goods as well. Once we do so, if the production of old goods grows quickly while our production of the new goods doesn’t, GDP growth may be slow.
Hopefully it is clear enough how the example can be extended so that (i) eventually productivity at good 2 grows quickly as well; (ii) by the time we are in that part of the utility function, utility is (say) logarithmic rather than linear in good 2; (iii) a third good is then introduced, slow-growing for a period early in time; and so on indefinitely, so that every good is eventually fast-growing but GDP never is. If not, hopefully the paper will make it clearer!
More on the motivation
This post doesn’t offer arguments against (or for) “radical AI scenarios” in some intuitive sense. It just offers an argument against the idea that “radical AI scenarios” (even given alignment etc.) must yield “explosive GDP growth”. I think the weakness of this link is worth emphasizing for at least two reasons.
First, some people are using forecasts of AI’s ability to accelerate growth as proxies for forecasts of AI’s ability to be geopolitically disruptive, lock in a utopian future, or pose existential risk. To my understanding, this proxy reasoning has been a primary motivation for some of the people who have asked what I thought about AI and growth, for Open Philanthropy’s work on AI and growth, and for various surveys of economists on AI and growth, including one in progress from the Forecasting Research Institute (on which I’m now collaborating). To some extent I think this proxying makes sense: impact on GDP growth under ideal conditions is a much more concrete variable to model and make predictions about than, say, impact on the value of the future, and I don’t think the two are totally uncorrelated. But I used to think and argue that they were much more tightly linked than I would now say.
Second, I expect that economic data can be very useful in AI scenario planning. This makes it all the more important not to anchor on the wrong data.
To elaborate: in a slow-moving world, policymakers can respond to economic events as they occur; but if “the world” will soon move much more quickly, and legislative/regulatory processes will be sped up less than other important processes, timely responses to developments will only be possible if a tree of conditional policy responses has been established in advance. So I think it would be valuable if more work were done exploring AI-related regulation or redistribution that kicks in conditional on economic events (among other things). For instance, if some want to pass a UBI[3] because they anticipate that wages will soon fall and/or that there will soon be a lot of productive robots to tax, and others object that the UBI is a bad idea before the wages have fallen or the robots have arrived, we may get consensus on passing a UBI that only starts scaling up if, say, the capital share crosses 80%. Likewise, people might agree that AI will be dangerous if extremely powerful, but some might not want to regulate it currently, since they doubt that it will soon be so powerful and see premature regulation as costly. A natural compromise would be regulation that comes into force only once AI capabilities cross some line. Regulation can be made conditional on features of the AI model (as proposed e.g. by Biden’s executive order and California’s SB 1047), but well-chosen economic indices might track “AI capabilities” in a sense more directly tied to the social and geopolitical implications of AI we actually care about for some purposes.[4] Badly chosen economic indices might not.
- ^
As anyone who knows me knows, this point has been a hobby-horse of mine for a while, but one thing after another has prevented me from writing it up properly. I’m now writing the “proper” version as part of a paper with Chad Jones (not mainly about this point), so hopefully it will actually get done. But in the meantime maybe this example will be helpful.
- ^
This isn’t exactly true if we count new farming implements and so on as “new goods”, rather than productivity improvements with respect to the same good of “calories”. But I would argue that there is at least a much wider margin for “growth via more copies of the same old goods” in a Malthusian setting than on the growth path chosen by a utility-maximizing fixed population.
- ^
Of some form; e.g. in the US perhaps a large expansion to the earned income tax credit.
- ^
I for one have been surprised by how capable AI models have managed to get without yet impacting much of anything that matters!
I just googled “Phil Trammell new product varieties from AI could introduce ambiguities in accounting for GDP” because I wanted something to link to, and saw you'd posted this. Thanks for writing it up!
No worries, good to hear!
(FYI though I think we've chatted about several new varieties issues that I think could come up in the event of a big change in "growth mode", and this post is just about one of them.)
A while back John Wentworth wrote the related essay What Do GDP Growth Curves Really Mean?, where he pointed out that you wouldn't be able to tell that AI takeoff was boosting the economy just by looking at GDP growth data, because of the way GDP is calculated (emphasis mine):
I do agree with your remark that
but for the GDP case I don't actually have any good alternative suggestions, and am curious if others do.
Thanks for pointing me to that post! It’s getting at something very similar.
I should look through the comments there, but briefly, I don’t agree with his idea that
If next year we came out with a way to make caviar much more cheaply, and a car that runs on caviar, GDP might balloon in this-year prices without the world looking crazy to us. One thing I’ve started on recently is an attempt to come up with a good alternative suggestion, but I’m still mostly at the stage of reading and thinking (and asking o1).
Not sure this is disagreement per se, but I think the surprising behavior of GDP in your model is almost entirely due to the shape of the utility function and doesn't have much to do with either (1) the distinction between existing vs new products or (2) automation. In other words, I think this is still basically Baumol, although I admit to a large extent I'm just arguing here about preferred conceptual framing rather than facts.
Consider modifying your model as follows (which I presume makes it more like a traditional Baumol model):
Using your same utility function u(x1, x2) = log(x1) + x2, production is fully devoted to Good 1 for all times t < 0, and during that time GDP is growing at r1. Then at time t = 0, it becomes worthwhile to start producing Good 2. For t > 0, the productivity growth rate of Good 1 remains much higher than that of Good 2 (r1 ≫ r2), and indeed the number of units of Good 1 produced grows exponentially faster than that of Good 2:

x1(t) = e^{(r1 − r2)t}, x2(t) = e^{r2 t} − 1

Nonetheless, the marginal value of Good 1 plummets due to the log in the utility function. Specifically, the relative price of Good 1 falls exponentially in time, p1(t)/p2(t) = e^{−(r1 − r2)t}, where pi(t) := ∂u/∂xi evaluated along the consumption path, as does Good 1's price-weighted fraction of production:

p1(t)x1(t) / (p1(t)x1(t) + p2(t)x2(t)) = e^{−r2 t}
GDP growth falls from r1 and exponentially asymptotes to r2 for large t.
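(To make this concrete, here's a quick numeric check of the model above in Python; r1 and r2 are arbitrary illustrative values. The spending share of Good 1 decays like e^{−r2 t}, and GDP growth decays from r1 toward r2, matching the closed forms.)

```python
import numpy as np

r1, r2 = 0.30, 0.02   # illustrative productivity growth rates, r1 >> r2

def model(t):
    x2 = np.exp(r2 * t) - 1.0
    share1 = 1.0 / (1.0 + x2)      # = e^{-r2 t}, since p1*x1 = p2*1 when p1/p2 = 1/x1
    g1 = r1 - r2                   # d log x1 / dt
    g2 = r2 * np.exp(r2 * t) / x2  # d log x2 / dt (valid for t > 0)
    return share1, share1 * g1 + (1.0 - share1) * g2

for t in [1, 10, 50, 200]:
    s, g = model(t)
    print(f"t = {t:4d}: Good 1 share = {s:.4f}, GDP growth = {g:.4f}")
# GDP growth = e^{-r2 t}(r1 - r2) + r2: starts near 0.30, decays toward 0.02
```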
Two points on the model:
Ok, so then what are the takeaways for AI? By cleanly separating the utility-function effect from shocks to productivity, I think this gives us reason to believe that the past is a reasonable guide to the future. Yes, there could be weird kinks in our utility function, but in terms of revealing kinks there's not much reason to think that AI-induced productivity gains will be importantly different from past productivity gains.
What quantity should we measure if not GDP?
I think there's just no getting around the fact that the kind of growth we care about is unavoidably wrapped up in our utility function. But as long as some fraction of people want to build Jupiter brains and explore Andromeda enough that they don't devote ~all their efforts to goals that are intrinsically upper bounded, I expect AGI to lead to rapid real GDP growth (although it does likely eventually end with light-speed limits or whatever).
If growth were slow post-singularity, I think that would imply something pretty weird about human utility in this universe (or rather, the utility of the beings controlling the economy). There could of course still be crazy things happening, like wild increases in energy usage at the same time, but this isn't too different from how wild the existence of nanometer-scale transistors is relative to pre-industrial civilization. If you care about those crazy things independent of GDP (which is a measure of how fast the world overall is getting what it wants), you should probably just measure them directly, e.g., energy usage, planets colonized, etc.
Hey Jess, thanks for the thoughtful comments.
On whether "this is still basically Baumol"
If we make that one tiny tweak and say that good 2 was out there to be made all along, just too expensive to be demanded, then yes, it's the same! That was the goal of the example: to introduce a Baumol-like effect in a way so similar to how everyone agrees Baumol effects have played out historically that it's really easy to see what's going on.
I'm happy to say it's basically Baumol. The distinction I think is worth emphasizing here is that, when people say "Baumol effects could slow down the growth effects of AI", they are usually--I think always, in my experience--pointing to the fact that if

(1) some good can only be produced with human labor (i.e. automation falls short of full), and
(2) that good is not very substitutable with the goods whose production is automated, so its spending share doesn't dwindle,
then GDP growth won't speed up much. This then invites the response that, when we look around, there doesn't seem to be any human-only good strongly satisfying (1) and (2). My point is that the existence of a human-only good satisfying (1) and (2) is unnecessary: the very same effect can arise even given true full automation. It arises not from any limitation of our ability to fully automate, but from the fact that a single technological advance can both achieve full automation (yielding a world where we can produce way more of everything we would ever have produced without the advance) and go further, letting us produce some goods we otherwise wouldn't have been on track to produce at all. This has not been widely appreciated.
On whether there is any reason to expect the productivity acceleration to coincide with the "kink in the utility function"
Here I think I disagree with you more substantively, though maybe the disagreement stems from the small framing point above.
If indeed "good 2" were always out there, just waiting for its price to fall, and if a technology were coming that would just replace all our existing workers, factory parts, and innovators with versions that operate more quickly in equal proportion--so that we move along the same paths of technology and the quantity of each good produced, but more quickly--then I agree that the past would be a good guide to the future, and GDP growth would reliably rise a lot. The only way it wouldn't would be if the goods we were just about to start producing anyway were like good 2, featuring less steeply diminishing marginal utility but way slower productivity growth, so that the rise in productivity growth across the board coincidentally turned up at the same time as the "utility function kink".
But if the technological advances that are allowing us to automate the production of everything people would ever be able to produce without the advances are also what allow for the invention of goods like good 2, it wouldn't be a coincidence. I.e. presumably full automation will coincide with not only a big increase in productivity growth (which raises GDP growth, in the absence of a random "utility function kink") but also a big change in the direction of productivity growth, including via making new products available (which introduces the kind of "utility function kink" that has an arbitrary effect on GDP growth). The idea that we're soon producing very different products than we otherwise ever would have, whose productivity is growing at very different rates, seems all the more likely to me when we remember that even at 30% growth we're soon in an economy several orders of magnitude bigger: the kink just needs to show up somewhere, not anywhere near the current margin.
To reiterate what I noted at the beginning though, I'd be surprised if the ambiguous second effect single-handedly outweighed the unambiguously positive first effect. And it could just as well amplify it, if "good 2" exhibits faster than average productivity growth.
Thanks!
OK but this key feature of not being on track to produce Good 2 only happens in your model specifically because you define automation to be a thing that takes Good-2 productivity from 0 to something positive. I think this is in conflict with the normal understanding of what "automation" means! Automation is usually understood to be something that increases the productivity of something that we could already produce at least a little of in principle, even if the additional efficiency means actual spending on a specific product goes from 0 to 1. And as long as we could produce a little of Good 2 pre-automation, the utility function in your model implies that the spending in the economy would eventually be dominated by Good 2 (and hence GDP growth rates would be set by the growth in productivity of Good 2) even without full automation (unless the ratio of Good-1 and Good-2 productivity is growing superexponentially in time).
What kind of product would we be unable to produce without full automation, even given arbitrary time to grow? Off the top of my head I can only think of something really ad hoc like "artisanal human paintings depicting the real-world otherwise-fully-autonomous economy".
That's basically what makes me think that "the answer is already in our utility function", which we could productively introspect on, rather than some empirical uncertainty about what products full automation will introduce.
I'm not sure what the best precise math statement to make here is, but I suspect that at least for "separable" utility functions of the form u(x1, …, xN) = Σ_n u_n(x_n) you need either a dramatic difference in diminishing returns for the u_n (e.g., log vs. linear as in your model) or you need a super dramatic difference in the post-full-automation productivity growth curves (e.g., one grows exponentially and the other grows superexponentially) that is absent pre-automation. (I don't think it's enough that the productivities grow at different rates post-automation.) So I still think we can extract this from our utility function without knowing much about the future, although maybe there's a concrete model that would show that's wrong.
Okay, I'm happy to change the title to (a more concise version of) "the ambiguous effect of a technological advancement that achieves full automation, and also allows new goods to be introduced on GDP growth" if that would resolve the disagreement. [Update: have just changed the title and a few words of the body text; let me know.]
On the second point: in practice I don't think we have additively separable utility, and I don't know what you mean by "extracting this from our utility function". But anyway, if I'm understanding you, that is wrong: if your utility function is additively separable with an upper bound in each good, say u(x) = Σ_n max(0, 1 − 1/x_n), a technological shift can yield superexponential growth in the quantity of each good n but exponential GDP growth. I'll write up a note on how that works this evening if that would be helpful, but I was hoping this post could just be a maximally simple illustration of the more limited point that Baumol-like effects can slow growth even past the point of full automation.
Oh yea, I didn't mind the title at all (although I do think it's usefully more precise now :)
Agreed on additively separable utility being unrealistic. My point (which wasn't clearly spelled out) was not that GDP growth and unit production can't look dramatically different. (We already see that in individual products like transistors (>> GDP) and rain dances (<< GDP).) It was that post-full-automation isn't crucially different from pre-full-automation unless you make some imo pretty extreme assumptions to distinguish them.
By "extracting this from our utility function", I just mean my vague claim that, insofar as we are uncertain about GDP growth post-full-automation, understanding better the sorts of things people and superhuman intelligences want will reduce that uncertainty more than learning about the non-extreme features of future productivity heterogeneity (although both do matter if extreme enough). But I'm being so vague here that it's hard to argue against.
Ok, fair enough--thanks for getting me to make it clearer :). So I guess the disagreement (if any remains, post-retitling/etc) is just about how plausible we think it is that the technological advances that accompany full automation will be accompanied by further technological advances that counterintuitively slow GDP growth through the "new-products-Baumol" mechanism illustrated here. I don't think that's so implausible, and hopefully the note I'll write later will make it clearer where I'm coming from on that.
But this post isn't aiming to argue for the plausibility, just the possibility. It seems to me that a lot of discussion of this issue hasn't noticed that it's even a theoretical possibility.
Here's an example in which utility is additively separable, un(.) is identical for all goods, the productivity and quantity of all goods grow hyperbolically, and yet GDP grows exponentially.
I've been thinking about this post for days, which is a great sign, and in particular I think there's a deep truth in the following:
I realize this is tangential to your point about GDP measurement, but I think Uzawa's theorem probably set growth theory back by decades. By axiomatizing that technical change is labor-augmenting, we became unable to speak coherently about automation, something that is only changing recently. I think there is so much more we can understand about technical change that we don't yet. My best guess of the nature of technological progress is as follows:
This idea is given some empirical support by Hubmer 2022 and theoretical clarity by Jones and Liu 2024, but it's still just a conjecture. So I think the really important question about AI is whether the tons of new products it will enable will themselves be labor-intensive or capital-intensive. If the new products are capital-intensive, breaking with historical trend, then I expect that the phenomenon you describe (good 2's productivity doesn't grow) will not happen.
Great to hear, thanks!
As for the prediction—fair enough. Just to clarify though, I’m worried that the example makes it look like we need growth in the new good(s) to get this weird slow GDP growth result, but that’s not true. In case that’s the impression you got, this example illustrates how we can have superexponential growth in every good but (arbitrarily slow) exponential growth in GDP.
Executive summary: Full automation may lead to ambiguous GDP growth outcomes, as the introduction of new goods can decouple GDP from actual technological advancements and societal welfare.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.