Vasco Grilo

Interesting post, Jim!

In the relevant examples that are enslaved humans and exploited animals, suffering itself is not a limiting factor.

I think suffering may actually be a limiting factor. There is a point beyond which worsening the conditions in factory farms would not increase productivity, because the increase in mortality and disability (and therefore suffering) would not be compensated by the decrease in costs. In general, if pain is sufficiently severe, animals will be physically injured, which limits how useful they will be.

Thanks for sharing your thoughts, Mariana!

(1) the effects of climate change are not a probability but already occurring, so may not be understood as hierarchically similar to the risk of an ASRS that has not yet occurred

Agreed:

“[Global warming] has the potential to result in—and to some extent is already resulting in—increased natural disasters, increased water and food insecurity, and widespread species extinction and habitat loss”.

However, I only focussed on the extreme effects of climate change, which are hypothetical, as are the risks of ASRSs. I also believe that, even from a neartermist perspective, it is pretty unclear whether climate change is good/bad due to effects on wild animals.

(2) an ASRS could be more probable in a world exposed to higher tensions related with food shocks occurring as a result of climate change

That makes sense, but I do not think it is a major issue:

my model implicitly assumes that climate change does not affect the risk from ASRSs via increased risk from nuclear war. I believe this is about right, in agreement with Chapter 12 of John Halstead’s report on climate change and longtermism:

  • Most of the indirect risk from climate change flows through unaligned artificial intelligence, engineered pandemics, and unforeseen anthropogenic risks, whose existential risk between 2021 and 2120 is guessed by Toby Ord in The Precipice to be 100, 33.3, and 33.3 times that of nuclear war. Nevertheless, there is significant uncertainty around these estimates[3].
  • Conflicts between India and China/Pakistan are the major driver of the risk from climate change, but these countries only have 7.15 % (= (160 + 350 + 165)/9,440) of the global nuclear warheads according to these data from Our World in Data (OWID) (see the sketch below).
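
For reference, a minimal Python sketch of the warhead-share calculation above, using the OWID figures quoted (the variable names are illustrative):

```python
# Share of global nuclear warheads held by India, China and Pakistan,
# using the Our World in Data figures quoted above.
warheads = {"India": 160, "China": 350, "Pakistan": 165}
global_warheads = 9_440

share = sum(warheads.values()) / global_warheads
print(f"{share:.2%}")  # 7.15%
```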

I think (3) and (4) are great points!

(3) specifically thinking on the model results, I wonder if any of these models has been able to dissect the effect of lower temperatures to those related with lower radiation and lower rainfall associated with the ASRS (otherwise overall effects could be underestimated); (4) I also wonder about the reliability of these models to represent the effect of temperature reductions vs. to represent the effect of radiation reductions (probably more accurate in representing T reduction and thus, other effects of an ASRS not attenuated by global warming - reduced rainfall, radiation, UV- , still important in crop failure).

My model does not have an explicit distinction between the effects of temperature, and those of radiation and rainfall. However, the effective soot reduction due to global warming is proportional to a uniform distribution ranging from 0 to 1. The greater the importance of the temperature variation (as opposed to those of radiation and rainfall), the narrower and closer to one the distribution should be. It arguably makes sense to use a wide distribution given our uncertainty. However:

Ideally, one should run the climate and crop models for each level of global warming [thus obtaining updated temperature, radiation and rainfall profiles], since the climate response caused by ASRSs depends on the pre-catastrophe global mean temperature. As an example of why this might be relevant, I do not know whether there is a good symmetry between the regional effects of global cooling and warming.
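
As a rough illustration of the adjustment described two paragraphs above, here is a minimal Python sketch (not the actual model; it assumes the soot reduction fraction is simply drawn from a uniform distribution between 0 and 1, and the injection size is illustrative):

```python
import random

def effective_soot(soot_injected_tg: float) -> float:
    """Sample the effective soot after the global-warming adjustment.

    The reduction fraction is drawn from a wide uniform distribution between
    0 and 1, reflecting uncertainty; a narrower distribution closer to 1 would
    give more weight to temperature effects relative to radiation and rainfall.
    """
    reduction_fraction = random.uniform(0, 1)
    return soot_injected_tg * (1 - reduction_fraction)

random.seed(0)
samples = [effective_soot(37.4) for _ in range(10_000)]  # 37.4 Tg is just an illustrative injection
print(sum(samples) / len(samples))  # mean is about half of the injected soot
```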

Hi Christian,

I have now had a better look at the report. It really is great!

Second, this argument ["one argument against studying how to keep nuclear war limited is that doing so would itself make nuclear war seem “winnable” and thus weaken the nuclear taboo"] neglects the possibility that e.g. civil defense interventions increase the attacker’s uncertainty about the effects of their weapons, thus potentially making nuclear use less likely.

In general, I think uncertainty makes war more likely. It is one of the 5 reasons Chris Blattman gives for wars happening:

Chris argues that social scientists have generated five cogent models of when war can be ‘rational’ for both sides of a conflict:

  1. Unchecked interests — such as national leaders who bear few of the costs of launching a war.
  2. Intangible incentives — such as an intrinsic desire for revenge.
  3. Uncertainty — such as both sides underestimating each other’s resolve to fight.
  4. Commitment problems — such as the inability to credibly promise not to use your growing military might to attack others in future.
  5. Misperceptions — such as our inability to see the world through other people’s eyes.

You say that:

According to the bioweapons scholar Malcolm Dando, part of the reason for the Nixon administration’s abandonment in the late 1960s and early 1970s of biological weapons was not only that biological weapons were less useful than nuclear weapons, but that they were more difficult to monopolize

These reasons continue to hold, so should one expect the substitute deterrent to have similar properties to nuclear weapons? How about just keeping the nuclear weapons in submarines (supposing that was politically feasible; I guess it would be hard)?

Similar attitudes may have existed in France and Britain; interest in bioweapons declined as capabilities in nuclear weapons increased, as evidenced in a shift in budgetary allocations from the 1950s to the 1970s away from biological and towards nuclear programs.[174]

Is there any concrete evidence about countries with nuclear weapons saying they would have to invest in biological weapons if they decreased their nuclear weapons today (as opposed to 60 years ago)? If biological weapons were a substitute for nuclear weapons, one would expect greater development of biological weapons as the number of nuclear weapons decreased from 64 k in 1986 to 9.4 k in 2022. Did this happen? Maybe looking into accidents could offer an idea.

Hi Rob,

Nice to know you are interviewing Anders! Some questions:

  • Should more resources be directed towards patient philanthropy at the margin? How much more/less?
  • How binary is longterm value? Relevant to the importance of the concept of existential risk.
  • Should the idea that more global warming might be good to mitigate the food shocks caused by abrupt sunlight reduction scenarios (ASRSs) be taken seriously? (Anders is on the board of ALLFED, and therefore has knowledge about ASRSs.)
  • What fraction of the expected effects of neartermist interventions (e.g. global health and development, and animal welfare) flows through longtermist considerations (e.g. longterm effects of changing population size, or expansion of the moral circle)?
  • Under moral realism, are we confident that superintelligent artificial intelligence disempowering humans would be bad?
  • Should we be uncertain about whether saving lives is good/bad because of the meat eater problem?
  • What is the chance that the time of perils hypothesis is true (e.g. how does the existential risk this century compare to that over the next 1 billion years)? How can we get more evidence for/against it? Relevant because, if existential risk is spread out over a long time, reducing existential risk this century has a negligible effect on total existential risk, as discussed by David Thorstad.
  • How high is the chance of AGI lock-in this century?
  • What can we do to ensure a bright future if there are advanced aliens on or around Earth (Magnus Vinding's thoughts)? More broadly, should humanity do anything differently due to the possibility of advanced civilisations which did not originate on Earth?
  • How much weight should one give to the XPT's forecasts? The ones regarding nuclear extinction seem way too pessimistic to be accurate. Superforecasters and domain experts predicted a likelihood of nuclear extinction by 2100 of 0.074 % and 0.55 % respectively. My guess would be something like 10^-6 (10 % chance of a global nuclear war involving tens of detonations, 10 % chance of it escalating to thousands of detonations, and 0.01 % chance of that leading to extinction), in which case superforecasters would be off by about 3 orders of magnitude (see the sketch after this list).
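
A minimal Python sketch of that guess and its comparison with the XPT superforecasters (the factor names are illustrative):

```python
# Rough decomposition behind the 10^-6 guess above.
p_war = 0.10            # global nuclear war involving tens of detonations by 2100
p_escalation = 0.10     # escalation to thousands of detonations, given a war
p_extinction = 1e-4     # 0.01 % chance of extinction, given escalation

p_total = p_war * p_escalation * p_extinction
print(p_total)  # about 1e-06

# Ratio to the XPT superforecasters' 0.074 % forecast.
print(0.00074 / p_total)  # about 740, i.e. roughly 3 orders of magnitude
```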

That makes sense to me. The overall neglectedness of post-catastrophe interventions in area A depends on the neglectedness of area A, and on the neglectedness of post-catastrophe interventions within area A. The higher each of these 2 levels of neglectedness, the higher the cost-effectiveness of such interventions.

What I meant with my previous comment was that, even if right-of-boom interventions to decrease nuclear risk were as neglected as left-of-boom ones, it could still be the case that nuclear risk is super neglected in society.

Thanks for the fair feedback, Johannes!

Just one note on:

So, it seems to me that a 30x estimate seems strongly at odds with the general belief underlying most longtermist effort that societally we are predictably underinvesting in low-probability catastrophic / existential risk reduction.

I do not think there is a contradiction. The multiplier of 30 would only suggest that left-of-boom and right-of-boom interventions are similarly neglected, and therefore similarly effective, neglecting other considerations. However, it could still be the case that the marginal cost-effectiveness of left-of-boom and right-of-boom interventions is much higher than that of governments.

Thanks for writing this, Stijn! Strongly upvoted.

I just wanted to note that there is also a benefit cascade for the same reasons there is a harm cascade.

Thanks, Nicholas!

I think it would be nice to have other causes besides factory farming, global health and development, and climate change, namely 80,000 Hours' 4 most pressing problems.

Thanks for elaborating! I can see that right-of-boom spending before the nuclear war is most likely more effective than after it.

I do not know the overall multiplier accounting for all of this, and I am not confident it favours right-of-boom spending at the current margin.

To clarify, by "all of this" I meant not just considerations about whether it is better to spend before or after the nuclear war, but also about the expected spending on left- and right-of-boom interventions. I am thinking along these lines:

  • Left-of-boom spending is currently at 30 M$/year.
  • The expected right-of-boom spending is 1 G$/year, for a probability of 0.1 %/year of a global nuclear war leading to 1 T$ being invested in right-of-boom spending.
  • Right-of-boom spending before nuclear war is 30 times as effective as after the nuclear war, for the reasons you mentioned.
  • So the expected right-of-boom spending (adjusted for effectiveness) is equivalent to roughly 33 M$/year (= 1000/30) of right-of-boom spending before the nuclear war.
  • Therefore it is not obvious to me that right-of-boom spending before nuclear war is way more neglected than left-of-boom spending (I got roughly 30 M$/year for both above), even if right-of-boom spending before the nuclear war is most likely more effective than after it.

Basically, what I am saying is that, even if right-of-boom spending after the nuclear war is much less effective, it would be so large that the expected right-of-boom spending adjusted for effectiveness could still be comparable with current left-of-boom spending. Does this make sense?
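
A minimal Python sketch of this comparison, using the illustrative numbers above (the variable names are mine):

```python
# Illustrative comparison of left-of-boom spending with expected
# effectiveness-adjusted right-of-boom spending.
left_of_boom = 30e6           # $/year of current left-of-boom spending
p_war = 0.001                 # probability per year of a global nuclear war
right_of_boom_if_war = 1e12   # $ of right-of-boom spending if a war happens
effectiveness_ratio = 30      # pre-war spending assumed 30 times as effective as post-war

expected_right_of_boom = p_war * right_of_boom_if_war          # 1e9 $/year
adjusted_right_of_boom = expected_right_of_boom / effectiveness_ratio

print(adjusted_right_of_boom / 1e6)  # about 33 M$/year, comparable to left-of-boom
```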

Note I am not claiming that left-of-boom spending is more/less effective than right-of-boom spending before nuclear war. I am just suggesting that left- and right-of-boom spending may not have super different levels of neglectedness.

Thanks for sharing, Christian!

To summarize:

  • “Right of boom” interventions — focusing on problems arising after the first use of nuclear weapons, such as escalation control — are an important part of risk reduction.
  • These very interventions have been severely neglected by philanthropic funders, possibly for ideological reasons.
  • These facts suggest that prioritizing “right of boom” interventions is a promising impact multiplier for funders on the margin.

Relatedly, I commented before that:

You considered actual (past) spending to assess neglectedness, as is often done, but should the focus be on expected future spending?

For the left-of-boom spending, nothing is changed, because by definition there is no spending of this kind after the start of a nuclear war.

However, expected right-of-boom spending is dominated by the spending in the event of a nuclear war. For example, assuming a probability of 0.1 % of full-scale nuclear war in 2023, and a right-of-boom spending of 1 T$ [1 trillion] if it materialises, the expected right-of-boom spending would be 1 G$ [1 billion]. This is 1.95 and 59.8 times your estimates for the left-of-boom and right-of-boom spending between 2012 and 2022. So there is a sense in which left-of-boom spending is more neglected.

I can see how right-of-boom spending may be more effective before nuclear war. Coming up with good interventions in little time may be harder, since some thinking may not be parallelizable. On the other hand, right-of-boom spending after the start of the nuclear war might be more effective due to more information being available (e.g. about the countries involved, and their motives for starting the war). I do not know the overall multiplier accounting for all of this, and I am not confident it favours right-of-boom spending at the current margin.

I suppose the same argument applies to other causes. For example, for global warming, it is true that interventions targeting warming higher than 3 ºC are neglected if we look at past spending. However, this underestimates the respective expected future spending. Resources would start flowing towards such interventions if our best guess for warming by 2100 moved upwards. Does Founders Pledge's robust diversification approach account for this?
