Magnus Vinding

Researcher @ Center for Reducing Suffering
1823 karma · Copenhagen, Denmark
magnusvinding.com/

Bio

Working to reduce extreme suffering for all sentient beings.

Author of Suffering-Focused Ethics: Defense and Implications; Reasoned Politics; & Essays on Suffering-Focused Ethics.

Co-founder (with Tobias Baumann) of the Center for Reducing Suffering (CRS).

Ebooks available for free here and here.

Comments

Naive projection about o4 and beyond

The Codeforces Elo progression from o1-mini to o3-mini was around 400 points (with compute costs held constant). Similarly, the Elo jumps from 4o (~800) to o1-preview (~1250) to o1-mini (~1650) were also each around 400 points (the compute costs of 4o appear similar to those of o1-mini, while they're higher for o1-preview).

People from OpenAI report that o4 is now being trained and that training runs take around three months in the current "reasoning paradigm". So if we were to engage in naive projection, we might project a continued ~400 point Codeforces progression every three months.

Below is such a naive projection for the o1-mini cost range, with the dates referring to when model scores are announced (not when the models are released); a short extrapolation sketch follows the list.

  • March 2025 (March 14th?): o4 ~2400
  • June 2025: o5 ~2800
  • September 2025: o6 ~3200
  • December 2025: o7 ~3600
    • If high compute adds around 700 Elo points for full o7 (as it does for o3), this would give full o7 a superhuman score of ~4300
  • March 2026: o8 ~4000 (a score only ever achieved by two people)
  • June 2026: o9 ~4400 (superhuman level for cheap)
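For concreteness, here is a minimal sketch of this extrapolation in Python. All the numbers (the ~2400 starting point for o4, the ~400-point step per roughly three-month cycle, and the ~700-point high-compute bonus) are simply the assumptions of the naive projection above, not official figures, and applying the bonus to every model is purely illustrative (above it is only discussed for o3 and o7).

```python
# Naive linear extrapolation of Codeforces Elo, using the assumptions above:
# start from the projected o4 score and add ~400 Elo per ~3-month training cycle.
# The ~700-point high-compute bonus (as reported for o3) is shown alongside
# each entry purely for illustration.

base_score = 2400         # projected o4 score (March 2025)
step = 400                # assumed Elo gain per ~3-month cycle
high_compute_bonus = 700  # rough low- vs. high-compute gap, as with o3

models = ["o4", "o5", "o6", "o7", "o8", "o9"]
dates = ["2025-03", "2025-06", "2025-09", "2025-12", "2026-03", "2026-06"]

for i, (model, date) in enumerate(zip(models, dates)):
    low = base_score + step * i
    print(f"{date}  {model}: ~{low} (cheap), ~{low + high_compute_bonus} (high compute)")
```

On these assumptions, the cheap-compute projection reaches ~4000 in March 2026 and ~4400 in June 2026, matching the list above.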

Part of the motivation for making such a naive projection is that it can provide a salient yardstick to hold future progress up against, to notice whether progress on this benchmark is slowing down, keeping pace, or accelerating.

As further motivation, one can note that there is some precedent for Elo scores improving linearly over time in other domains, e.g. in chess.

Likewise, while they're more subjective, Elo scores on the LLM leaderboard also appear to have increased fairly consistently, by an average of ~20 points per month over the last year (the trend has continued beyond the graphed period; the current top-10 average is at the ~1360 level one would have predicted based on a naive extrapolation of the post-2023-11 trendline).

This is what I meant:

it seems to me like a striking ... kind of coincidence to end at exactly — or indistinguishably close to — ... any position of complete agnosticism

That is, I think it tends to apply to complete and perfect agnosticism in general, even if one doesn't frame or formulate things in terms of 50/50 or the like. (Edit: But to clarify, I think it's less striking the less one has thought about a given choice and the less the options under consideration differ in character; so I think there are many situations in which practically complete agnosticism is reasonable.)

Thanks for your comment :)

fwiw, I think I'm more skeptical than you that we'll ever find evidence robust enough to warrant updating away from radical agnosticism on whether our influence on cosmic actors makes the future better or worse

I guess there are various aspects that are worth teasing apart there, such as: humanity's overall influence on other cosmic actors, a given altruistic community's influence on cosmic actors, individual actions taken (at least partly) with an eye to having a beneficial influence on (or together with) other cosmic actors, and so on. I guess our analyses, our degrees of agnosticism, and our final answers can differ greatly across different questions like these. For example, individual actions might be less difficult to optimize given their smaller scale and given that we have greater control over them (even if they're still very difficult to predict and optimize in absolute terms).

I also think a lot depends on the meaning of "radical agnosticism" here. A weak interpretation might be something like "we'll generally be pretty close to 50/50, all things considered". I'd agree that, in terms of long-term influence, that's likely to be the best we can do for the most part (though I also think it's an open question, and I don't see much reason to be firmly convinced of, or committed to, the view that we won't ever be able to do better).

A stronger interpretation might be something like "we'll practically always be exactly at — or indistinguishably close to — 50/50, all things considered". That version of radical agnosticism strikes me as too radical. On its face, it can seem like a stance of exemplary modesty, yet on closer examination, I actually think it's the opposite, namely an extremely strong claim. I mean, it seems to me like a striking "throw a ball in the air and have it land and balance perfectly on a needle" kind of coincidence to end at exactly — or indistinguishably close to — 50/50 (or at any other position of complete agnosticism, e.g. even if one rejects precise credences).[1]

For example, I think the point about how we can't rule out that we might find better, more confident answers in the future (e.g. with the help of new empirical insights, new conceptual frameworks, better AI tools, and so on) is alone a reason not to accept such "strong" radical uncertainty, as this point suggests that further exploration is at least somewhat beneficial in expectation.

  1. ^

    For example, if you've weighed a set of considerations that point vaguely in one direction, it would seem like quite a coincidence if "unknown considerations" were to exactly cancel out those considerations. I see you've discussed whether unknown considerations might be positively correlated with known considerations, but it seems that even zero correlation (which is arguably a defensible prior) would still lead you to go with the conclusion drawn based on the known considerations; you'd seemingly need to assume a (weakly) negative correlation to consistently get back to a position of complete agnosticism.
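As a toy numerical illustration of this footnote (the model is entirely my own assumption, not something from the original discussion): treat the overall balance of evidence as the sum of a "known" and an "unknown" component that are jointly normal with correlation rho. With rho = 0, conditioning on the known component favoring one side leaves the expected total favoring that side; with equal variances, exact cancellation requires rho = -1, and a weaker negative correlation suffices only if the unknown component has a larger variance.

```python
# Toy illustration (assumed bivariate-normal model, equal unit variances):
# total = known + unknown considerations. With zero correlation, a positive
# balance of known considerations is not cancelled out in expectation; only a
# sufficiently negative correlation brings the expected total back to zero.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

for rho in [0.3, 0.0, -0.5, -1.0]:
    cov = [[1.0, rho], [rho, 1.0]]
    known, unknown = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    points_one_way = known > 0.5  # cases where known considerations point one way
    expected_total = (known + unknown)[points_one_way].mean()
    print(f"rho = {rho:+.1f}: E[total | known > 0.5] ≈ {expected_total:.2f}")
```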

Thanks for letting me know. :) I wasn't aware that Smashwords required registration. The PDF is also available here (expanded edition here).

On "cold computing": to clarify, the piece I linked to was not about aestivation / waiting. It was about using "cold computing" right away.

The comment from gwern lists some reasons that may speak against "cold computing" (in general) as playing a significant role in answering the Fermi question, but again, a question is how decisive those reasons are. Even if such reasons should lead us to think that "cold computing" plays no significant role with 95 percent confidence, it still seems worth avoiding the mistake of belief digitization: simply collapsing the complementary 5 percent down to 0.

In any case, the point about "cold computing" was merely a disjunctive possibility; the broader point about observer prevalence being unclear in 'grabby vs. quiet expansionist scenarios that include sims' does not rest on that particular possibility.

On simulations: I think it can make sense to set the simulation argument aside, at least provisionally, for a couple of reasons:

  1. The hypothesis that ancestor simulations (e.g. exact copies of your current conscious experience) are impossible to create seems like a plausible hypothesis that is worth exploring in its own right. (One can think that it is worth exploring even if one believes that faithful ancestor simulations are most likely possible.)
  2. Even if we grant that ancestor simulations are possible and trivially feasible, it still makes sense to explore the non-sim (or pre-sim) case, since that would presumably apply to the original simulators (if we assume an ancestor simulation picture in which our world at least roughly matches the original simulators' world). After all, if the anthropic argument holds for the OG simulators, then it would also hold for their ancestor simulations, assuming that those simulations really are ancestor simulations (somewhat analogously to a proof by induction). In this way, the 'non-sim case' seemingly has significant implications for what kind of simulation one should expect to be in (at least given the preceding assumptions).

If one includes sims, grabby civs would possibly but not necessarily have more observers (like us) than quiet expansionist civs. For example, the expected number of sims may be roughly the same, or even larger, in quiet expansionist scenarios that involve a deadline/shift (cf. sec. 4).[1] There's also the possibility that computation could be more efficient in quiet regimes (some have argued along these lines, though I'm by no means saying it's correct; I'm not sure if we currently understand physics well enough to make confident pronouncements either way).

But yes, the argument outlined in Section 3 was limited to "base reality" scenarios. Conditional on you not being in a simulation (e.g. if exact sims of your conscious experience are not possible), the anthropic argument in Section 3 suggests that you're in a quiet expansionist scenario, or in a quiet expansionist region within a mixed scenario. Conditional on you being in a simulation, it seems unclear.

  1. ^

    Why might it be even larger? Intuitively, one might think that grabby civs could start simulating earlier, since they don't have to wait and be quiet. But in the quiet expansionist model, expansionist civ origin dates would, in expectation, be significantly earlier, since we could be past the point where they've fully colonized. That is, in a grabby model, we'd now be pre-deadline and pre-colonized, whereas we may be "post-colonized" in the quiet expansionist model — indeed, we most likely would be if the hard-steps model is correct. So the expansionist civs would be considerably older (they could even be much older) in the quiet expansionist vs. the grabby model. Thus, if we only look at the past, it's conceivable that quiet civs would be able to run more sims, even if they have considerably fewer sims per colonized volume (as they might make up for it by having far more time and volume).

    At any rate, given the apparent size of the cosmic future compared to the past, what matters most for the expected number of sims is hardly earliness (e.g. full cosmic expansion at 9 vs. 15 billion years), but arguably more something like future willingness and capacity to devote resources toward simulations. And when it comes to the willingness aspect, I can see some reasons to think that civs that started out as quiet expansionists up till our point (not necessarily staying that way) might have more incentive to simulate vs. grabby ones. For example, the strategic situation and motives in quiet expansionist scenarios would plausibly be more concerned with potential adversaries from elsewhere, and civs in such scenarios may thus be significantly more inclined to simulate the developmental trajectories of potential adversaries from elsewhere, or civs that could give information about such adversaries. Of course, this is speculative, but it serves to show that the picture with sims is complicated and the upshots are non-obvious.

The dark matter thought has crossed my mind too (and others have also speculated along those lines). Yet the fact that dark matter appears to have been present in the very early universe speaks strongly against it — at least when it comes to the stronger "be" conjecture, less so the weaker "contain" conjecture, which seems more plausible.

I see, thanks for clarifying.

In terms of potential tradeoffs between expansion speeds vs. spending resources on other things, it seems to me that one could argue in both directions regarding what the tradeoffs would ultimately favor. For example, spending resources on the creation of Dyson swarms/other clearly visible activity could presumably also divert resources away from maximally fast expansion. (There is also the complication of transmitting the resulting energy/resources to frontier scouts, who might be difficult to catch up with if they are at ~max speeds.)

By rough analogy, if a human army were to colonize a vast (initially) uninhabited territory at max speed, it seems plausible that the best way to do so is by having frontier scouts rush out there in a nimble fashion, not by devoting a lot of resources toward the creation of massive structures right away. (And if we consider factors beyond speed, perhaps not being clearly visible also has strategic advantages if we add uncertainty about whether the territory really is uninhabited — an uncertainty that would presumably be present to some extent in all realistic scenarios.)

Of course, one could likewise make analogies that point in the opposite direction, but my point is simply that it seems unclear, at least to me, whether these kinds of tradeoff considerations would overall favor "loud civ expansion speed > quiet civ expansion speed" (assuming that there are meaningful tradeoffs).

Besides, FWIW, it seems quite plausible to me that advanced civs would be able to expand at the maximum possible speed regardless of whether they opted to be loud or quiet (e.g. they might not be driven by star power, or their technology might otherwise be so advanced that these contrasting choices do not constrain them either way).

Thanks for your comment. :) One reason I didn't use the term "zoo hypothesis" is that I've seen it defined in rather different ways. Relatedly, I'm unsure what you mean by zoo vs. natural reserve hypotheses/scenarios. How are these different, as you use these terms? Another question is whether proportions of zoos vs. natural reserves on Earth can necessarily tell us much about "zoos" vs. "natural reserves" in a cosmic context.

Thanks for your comment, Jim. :)

Why would you expect grabby aliens to expand faster than quiet expansionist ones? I didn't readily find a reason in your linked piece, and I don't see why loud vs. quiet per se should influence expansion speeds; both could presumably approach the ultimate limit of what is physically possible?
