Magnus Vinding

Researcher @ Center for Reducing Suffering
1419 karma · Copenhagen, Denmark
magnusvinding.com/

Bio

Working to reduce extreme suffering for all sentient beings.

Author of Suffering-Focused Ethics: Defense and Implications; Reasoned Politics; & Essays on Suffering-Focused Ethics.

Co-founder (with Tobias Baumann) of the Center for Reducing Suffering (CRS).

Ebooks available for free here and here.

Comments

The reason this matters is that EA frequently decides to make decisions, including funding decisions, based on these ridiculously uncertain estimates. You yourself are advocating for this in your article. 

I think that misrepresents what I write and "advocate" in the essay. Among various other qualifications, I write the following (emphases added):

I should also clarify that the decision-related implications that I here speculate on are not meant as anything like decisive or overriding considerations. Rather, I think they would mostly count as weak to modest considerations in our assessments of how to act, all things considered.

My claims about how I think these would be "weak to modest considerations in our assessments of how to act" are not predicated on the exact manner in which I represent my beliefs: I'd say the same regardless of whether I'm speaking in purely qualitative terms or in terms of ranges of probabilities.

In summary, people should either start stating their uncertainty explicitly, or they should start saying "I don't know".

FWIW, I do state uncertainty multiple times, though in qualitative rather than quantitative terms. A few examples:

This essay contains a lot of speculation and loose probability estimates. It would be tiresome if I constantly repeated caveats like “this is extremely speculative” and “this is just a very loose estimate that I am highly uncertain about”. So rather than making this essay unreadable with constant such remarks, I instead say it once from the outset: many of the claims I make here are rather speculative and they mostly do not imply a high level of confidence. ... I hope that readers will keep this key qualification in mind.

As with all the numbers I give in this essay, the following are just rough numbers that I am not adamant about defending ...

Of course, this is a rather crude and preliminary analysis.

Thanks! :)

Assigning a single number to such a prior, as if it means anything, seems utterly absurd.

I don't agree that it's meaningless or absurd. A straightforward meaning of the number is "my subjective probability estimate if I had to put a number on it" — and I'd agree that one shouldn't take it for more than that.

I also don't think it's useless, since numbers like these can at least help give a very rough quantitative representation of beliefs (as imperfectly estimated from the inside), which can in turn allow subjective ballpark updates based on explicit calculations. I agree that such simple estimates and calculations should not necessarily be given much weight, let alone dictate our thinking, but I still think they can provide some useful information and provoke further thought. I think they can add to purely qualitative reasoning, even if there are more refined quantitative approaches that are better still.
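To give a minimal sketch of the kind of ballpark update I have in mind (the numbers here are purely hypothetical and not taken from the essay): suppose one's rough prior in a hypothesis H is 1 percent, and one judges some piece of evidence E to be ten times more likely if H is true than if it is false. A simple application of Bayes' theorem then gives

$$
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} \;=\; \frac{0.5 \times 0.01}{0.5 \times 0.01 + 0.05 \times 0.99} \;\approx\; 0.09
$$

That is, the evidence would move the rough prior from about 1 percent to roughly 9 percent. The point is not that such a calculation settles anything, but that explicit numbers at least make the size of the update visible.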

You give a prior of 1 in a hundred that aliens have a presence on earth. Where did this number come from?

It was in large part based on the considerations reviewed in the section "I. An extremely low prior in near aliens". The following sub-section provides a summary with some attempted sanity checks and qualifications (in addition to the general qualifications made at the outset):

All-things-considered probability estimates: Priors on near aliens

Where do all these considerations leave us? In my view, they overall suggest a fairly ignorant prior. Specifically, in light of the (interrelated) panspermia, pseudo-panspermia, and large-scale Goldilocks hypotheses, as well as the possibility of near aliens originating from another galaxy, I might assign something like a 10 percent prior probability to the existence of at least one advanced alien civilization that could have reached us by now if it had decided to. (Note that I am here using the word “civilization” in a rather liberal sense; for example, a distributed web of highly advanced probes would count as a civilization in this context.) Furthermore, I might assign a probability not too far from that — maybe around 1 percent — to the possibility that any such civilization currently has a presence around Earth (again, as a prior).

Why do I have something like a 10 percent prior on there being an alien presence around Earth conditional on the existence of at least one advanced alien civilization that could have reached us? In short, the main reason is the info gain motive that I explore at greater length below. Moreover, as a sanity check on this conditional probability, we can ask how likely it is that humanity would send and maintain probes around other life-supporting planets assuming that we became technologically capable of doing this; roughly 10 percent seems quite sane to me.

At an intuitive level, I would agree with critics who object that a ~1 percent prior probability in any kind of alien presence around Earth seems extremely high. However, on reflection, I think the basic premises that get me to this estimate look quite reasonable, namely the two conjunctive 10-percent probabilities in “the existence of at least one advanced alien civilization that could have reached us by now if it had decided to” and “an alien presence around Earth conditional on the existence of at least one advanced alien civilization that could have reached us”.

Note also that there are others who seem to defend considerably higher priors regarding near aliens (see e.g. these comments by Jacob Cannell; I agree with some of the points Cannell makes, though I would frame them in more uncertain and probabilistic terms).

I can see how substantially lower priors than mine could be defensible, even a few orders of magnitude lower, depending on how one weighs the relevant arguments. Yet I have a hard time seeing how one could defend an extremely low prior that practically rules out the existence of near aliens. (Robin Hanson has likewise argued against an extremely low prior in near aliens.)
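To spell out the simple conjunction behind the ~1 percent prior quoted above (this merely restates the two 10-percent probabilities; nothing further is assumed):

$$
P(\text{presence near Earth}) \;=\; P(\text{reachable civilization}) \times P(\text{presence} \mid \text{reachable civilization}) \;=\; 0.10 \times 0.10 \;=\; 0.01
$$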

Thanks for your comment. I basically agree, but I would stress two points.

First, I'd reiterate that the main conclusions of the post I shared do not rest on the claim that extraordinary UFOs are real. Even assuming that our observed evidence involves no truly remarkable UFOs whatsoever, a probability of >1 in 1,000 in near aliens still looks reasonable (e.g. in light of the info gain motive), and thus the possibility still seems (at least weakly) decision-relevant. Or so my line of argumentation suggests.

Second, while I agree that the wild abilities are a reason to update toward thinking that the reported UFOs are not real objects, I also think there are reasons that significantly dampen the magnitude of this update. First, there is the point that we should (arguably) not be highly confident about what kinds of abilities an advanced civilization that is millions of years ahead of us might possess. Second, there is the point that some of the incidents (including the famous 2004 Nimitz incident) involve not only radar tracking (as reported by Kevin Day in the Nimitz incident), but also eye-witness reports (e.g. by David Fravor and Alex Dietrich in the case of Nimitz), and advanced infrared camera (FLIR) footage (shot by Chad Underwood during Nimitz). That diversity of witnesses and sources of evidence seems difficult to square with the notion that the reported objects weren't physically real (which, of course, isn't to say that they definitely were real).

When taking these dampening considerations into account, it doesn't seem to me that we have that strong reason to rule out that the reported objects could be physically real. (But again, the main arguments of the post I shared don't hinge on any particular interpretation of UFO data.)

I think it would have been more fair if you hadn't removed all the links (to supporting evidence) that were included in the quote below, since it just comes across as a string of unsupported claims without them:

Beyond the environmental effects, there are also significant health risks associated with the direct consumption of animal products, including red meat, chicken meat, fish meat, eggs and dairy. Conversely, significant health benefits are associated with alternative sources of protein, such as beans, nuts, and seeds. This is relevant both collectively, for the sake of not supporting industries that actively promote poor human nutrition in general, as well as individually, to maximize one’s own health so one can be more effectively altruistic.

I think this evidence on personal health is relevant in the ways described. I don't think it's fair to say that the quote above implies that “[health benefits] will definitely happen with no additional work from you, without any costs or trade-offs”; obviously, any change in diet will require some work and will involve some tradeoffs. But I agree that it's worth addressing the potential pitfalls of vegan diets, and it's a fair critique that that would have been worth including in that essay (even though a top link on the blog does list some resources on this).

FWIW, in terms of additional work, tradeoffs, and maximizing health, I generally believe that it is worth making a serious investment into figuring out how to optimize one's health, such as by investing in a DNA test for nutrition, and I think this is true for virtually everyone. Likewise, I think it's worth being clear that all diets involve tradeoffs and risks, including both vegan and omnivore diets (some of the risks associated with the latter are hinted at in the links above: "red meat, chicken meat, fish meat, eggs and dairy").

I didn't claim that there isn't plenty more data. But a relevant question is: plenty more data for what? He says that the data situation looks pretty good, which I trust is true in many domains (e.g. video data), and that data would probably in turn improve performance in those domains. But I don't see him claiming that the data situation looks good in terms of ensuring significant performance gains across all domains, which would be a more specific and stronger claim.

Moreover, the deference question could be posed in the other direction as well, e.g. do you not trust the careful data collection and projections of Epoch? (Though again, Ilya saying that the data situation looks pretty good is arguably not in conflict with Epoch's projections — nor with any claim I made above — mostly because his brief "pretty good" remark is quite vague.)

Note also that, at least in some domains, OpenAI could end up having less data to train their models with going forward, as they might have been using data illegally.

I think it's a very hard sell to try and get people to sacrifice themselves (and the whole world) for the sake of preventing "fates worse than death".

I'm not talking about people sacrificing themselves or the whole world. Even if we were to adopt a purely survivalist perspective, I think it's still far from obvious that trying to slow things down is more effective than is focusing on other aims. After all, the space of alternative aims that one could focus on is vast, and trying to slow things down comes with non-trivial risks of its own (e.g. risks of backlash from tech-accelerationists). Again, I'm not saying it's clear; I'm saying that it seems to me unclear either way.

We should be doing all we can now to avoid having to face such a predicament!

But, as I see it, what's at issue is precisely what the best way is to avoid such a predicament, or how best to navigate given our current all-too-risky predicament.

FWIW, I think that a lot of the discussion around this issue appears strongly fear-driven, to such an extent that it seems to get in the way of sober and helpful analysis. This is, to be sure, extremely understandable. But I also suspect that it is not the optimal way to figure out how to best achieve our aims, nor an effective way to persuade readers on this forum. Likewise, I suspect that rallying calls along the lines of "Global moratorium on AGI, now" might generally be received less well than would, say, a deeper analysis of the reasons for and against attempts to institute that policy.

What are the downsides from slowing down?

I'd again prefer to frame the issue as "what are the downsides from spending marginal resources on efforts to slow down?" I think the main downside, from this marginal perspective, is opportunity costs in terms of other efforts to reduce future risks, e.g. trying to implement "fail-safe measures"/"separation from hyperexistential risk" in case a slowdown is insufficiently likely to be successful. There are various ideas that one could try to implement.

In other words, a serious downside of betting chiefly on efforts to slow down over these alternative options could be that these s-risks/hyperexistential risks would end up being significantly greater in counterfactual terms (again, not saying this is clearly the case, but, FWIW, I doubt that efforts to slow down are among the most effective ways to reduce risks like these).

a fast software-driven takeoff is the most likely scenario

I don't think you need to believe this to want to be slamming on the brakes now.

Didn't mean to say that that's a necessary condition for wanting to slow down. But again, I still think it's highly unclear whether efforts that push for slower progress are more beneficial than alternative efforts.

I'm not sure what you are saying here? Do you think there is a risk of AI companies deliberately causing s-risks (e.g. releasing a basilisk) if we don't play nice!?

No, I didn't mean anything like that (although such crazy unlikely risks might also be marginally better reduced through cooperation with these actors). I was simply suggesting that cooperation could be a more effective way to reduce risks of worst-case outcomes that might occur in the absence of cooperative work to prevent them, i.e. work of the directional kind gestured at in my other comment (e.g. because ensuring the inclusion of certain measures to avoid worst-case outcomes has higher EV than does work to slow down AI). Again, I'm not saying that this is definitely the case, but it could well be. It's fairly unclear, in my view.

Thanks for your reply, Greg :)

I don't think this matters, as per the next point about there already being enough compute for doom

That is what I did not find adequately justified or argued for in the post.

I think the burden of proof here needs to shift to those willing to gamble on the safety of 100x larger systems.

I suspect that a different framing might be more realistic and more apt from our perspective. In terms of helpful actions we can take, I see the choice before us more as one between trying to slow down development and trying to steer future development in better (or less bad) directions conditional on the current pace of development continuing (of course, one could dedicate resources to both, but one would still need to prioritize between them). Both of those choices (as well as graded allocations between them) seem to come with a lot of risks, and they both strike me as gambles with potentially serious downsides. I don't think there's really a "safe" choice here.

All I'm really saying here is that the risk is way too high for comfort

I'd agree with that, but that seems different from saying that a fast software-driven takeoff is the most likely scenario, or that trying to slow down development is the most important or effective thing to do (e.g. compared to the alternative option mentioned above).
