I just posted an explanation of why I think the scenario in my fable is even more intractable than it appears: De Dicto and De Se Reference Matters for Alignment.
Why only a few million? You'll have to kill 9 billion people, and to what purpose? I don't see any reason to think that the current population of humans wouldn't be indefinitely sustainable. We can supply all the energy we need with nuclear and/or solar power, and that will get us all the fresh water we need; and we already have all the arable land that we need. There just isn't anything else we need.
Re. "You had mentioned concern about there being no statements of existential threat from climate change. Here's the UN Secretary General's speech on climate change where he claims that climate change is an existential threat."
No; I said that when I traced claims of existential threat from climate change back to their source, the trail always led back to the IPCC, and the latest IPCC summary report didn't mention anything remotely close to an existential threat to humans. This is yet another instance--the only source cited is the IPCC.
Thanks! That's a lot to digest. Do you know how "government approval" of IPCC reports is implemented, e.g., does any one government have veto power over everything in the report, and is this approval granted by leaders, political appointees, or more-independent committees or organizations?
Re. "Right now, I believe that all renewables are a sideshow, cheap or not, until we grasp that population decline and overall energy consumption decline are the requirements of keeping our planet livable for our current population" -- How does this belief affect your ethics? For instance, does this mean the US should decrease immigration drastically, to force poor countries to deal with their population problem? Should the US reduce grain exports? How would you approach the problem that the voluntary birth rate is higher in dysfunctional and highly-religious cultures than in stable developed secular ones? What are we to do about religions which teach that contraception is a sin?
I was hoping for an essay about deliberately using nonlinear systems in constructing AI, because they can be more-stable than the most-stable linear systems if you know how to do a good stability analysis. This was instead an essay on using ideas about nonlinear systems to critique the AI safety research community. This is a good idea, but it would be very hard to apply non-linear methods to a social community. The closest thing I've seen to doing that was the epidemiological models used to predict the course of Covid-19.
The essay says, "The central lesson to take away from complex systems theory is that reductionism is not enough. It’s often tempting to break down a system into isolated events or components, and then try to analyze each part and then combine the results. This incorrectly assumes that separation does not distort the system’s properties." I hear this a lot, but it's wrong. It assumes that reductionism is linear--that you want to break a nonlinear system into isolated components, then relate them to each other with linear equations.
Reductionism can work on nonlinear systems if you use statistics, partial differential equations, and iteration. Epidemiological models and convergence proofs for neural networks are examples. Both use iteration, and may give only statistical claims, so you might still say "reductionism is not enough" if you want absolute certainty, e.g., strict upper bounds on distributions. But absolute certainty is only achievable in formal systems (unapplied math and logic), not in real life.
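Epidemiological models illustrate the point: a minimal SIR model is nonlinear (the infection term multiplies two state variables), yet it's built reductionistically from per-component rates and solved by simple iteration. A sketch (the function name and all parameter values here are illustrative, not fitted to any real disease):

```python
def sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, steps=200, dt=1.0):
    """Iterate the nonlinear SIR equations with Euler steps.

    s, i, r = fraction of the population susceptible, infected, recovered.
    beta = transmission rate; gamma = recovery rate.
    """
    s, i, r = s0, i0, 1.0 - s0 - i0
    peak = i
    for _ in range(steps):
        new_infections = beta * s * i * dt  # nonlinear: product of two state variables
        recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        peak = max(peak, i)
    return s, i, r, peak
```

Each term is a local, component-level rate; the global behavior (an epidemic that peaks and burns out) emerges only from iterating them, not from any closed-form linear combination.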
The above essay seems to me to be trying to use linear methods to understand a nonlinear system, decomposing it into separable heuristics and considerations to be attended to, such as the line-items in the flow charts and bulleted lists above. That was about the best you could do, given the goal of managing the AI safety community.
I'd really like to see you use your understanding of complex systems either to try to find some way of applying stability analysis to different AI architectures, or to study the philosophical foundations of AI safety as it exists today. Those foundations rest on assumptions of linearity, analytic solvability, distrust of noise and evolution, and a classical (i.e., ancient Greek) theory of how words work--one which expects words to necessarily have coherent meanings, expects those meanings to have clear and stable boundaries, and requires high-level foundational assumptions because the words are at a high level of abstraction. This is all especially true of ideas that trace back to Yudkowsky. I think these can all be understood as stemming from over-simplifications required for linear analysis. They're certainly strongly correlated with it.
I dumped a rant that's mostly about the second issue (the metaphysics of the AI safety community today) onto this forum recently, here, which is a little more specific, though I fear perhaps still not specific enough to be better than saying nothing.
Thanks for the link to Halstead's report!
I can't be understating the tail risks, because I made no claims about whether global warming poses existential risks. I wrote only that the IPCC's latest synthesis report didn't say that it does.
I thought that climate change obviously poses some existential risk, but probably not enough to merit the panic about it. Halstead's report that you linked goes further: it says not just that there's no evidence of existential risk, but that his work gives evidence the existential risk is insignificant. I wouldn't go so far as to conclude "there is insignificant existential risk", but it appears that the remaining risk lies more in "we overlooked something" than in any evidence found.
The only thing I was confident of was that some people, including a member of Congress, incited panic by saying global warming was an imminent threat to the survival of humanity, and the citation chain led me back to that IPCC report, and nothing in it supported that claim.
I'm not claiming to have outsmarted anyone. I have claimed only that I have read the IPCC's Fifth Synthesis Report, which is 167 pages, and it doesn't report any existential threats to humans due to climate change. It is the report I found to be most-often cited by people claiming there are existential threats to humans due to global warming. It does not support such claims, not even once among its thousands of claims, projections, tables, graphs, and warnings.
Neither did I claim that there is no existential threat to humanity from global warming. I claimed that the IPCC's 5th Synth report doesn't suggest any existential threat to humanity from global warming.
Kemp is surely right that global warming "is" an existential threat, but so are asteroid strikes. He's also surely right that we should look carefully at the most-dangerous scenarios. But, skimming Kemp's paper recklessly, it doesn't seem to have any quantitative data to justify the panic being spread among college students today by authorities claiming we're facing an immediate dire threat, nor the elevation of global warming to being a threat on a par with artificial intelligence, nor the crippling of our economies to fight it, nor failing to produce enough oil that Europe can stop funding Russia's war machine.
And as I've said for many years: We already have the solution to global warming: nuclear power. Nuclear power plants are clearly NOT an existential threat. If you think global warming is an existential threat, you should either lobby like hell for more nuclear power, or admit to yourself that you don't really think global warming is an existential threat.
I don't think the IPCC is now looking more at scenarios with a less than 3C rise in temperature out of conservatism, but because they don't see a rise above 3C before 2100 except in RCP8.5 (Figure 2.3), which is now an unrealistically high-carbon scenario; and they were sick of news agencies reporting RCP8.5 as the "business as usual" case. (It was intended to represent the worst 10% out of just those scenarios in which no one does anything to prevent climate change.)
The IPCC's 5th Synth Report dismisses Kemp's proposed "Hothouse Earth" tipping point on page 74. Kemp's claim is based on a 2018 paper, so it is the more up-to-date claim. But Halstead's report from August 2022 is even more up-to-date, and also dismisses the Hothouse Earth tipping point.
Anyway. Back to the 5th Synth Report. It contains surprisingly little quantitative information; what it does have on risks is mostly in chapter 2. It presents this information in a misleading format, rating risks as "Very low / Medium / Very high", but these don't mean a low, medium, or high expected value of harm. They seem to mean a low, medium, or high probability of ANY harm of the type described, or, if they're smart, some particular value range for a t-test of the hypothesis of net harm > 0.
The text is nearly all feeble claims like this: "Climate change is expected to lead to increases in ill-health in many regions and especially in developing countries with low income, as compared to a baseline without climate change... From a poverty perspective, climate change impacts are projected to slow down economic growth, make poverty reduction more difficult, further erode food security and prolong existing and create new poverty traps, the latter particularly in urban areas and emerging hotspots of hunger (medium confidence). ... Climate change is projected to increase displacement of people (medium evidence, high agreement)."
I call these claims feeble because they're unquantitative. In nearly every case, no claim is made except that these harms will be greater than zero. Figure SPM.9 is an exception; it shows significant predicted reductions in crop yield, with an expected value of around a 10% reduction of crop yields in 2080 AD (eyeballing the graph). Another exception is Box 3.1 on p. 79, which says, "These incomplete estimates of global annual economic losses for temperature increases of ~2.5°C above pre-industrial levels are between 0.2 and 2.0% of income (medium evidence, medium agreement)." Another exception shows predicted ocean level rise (and I misspoke; it predicts a change of a bit more than 1 foot by 2100 AD). None of the few numeric predictions of harm or shortfall that it makes are frightening.
In short, I'm not saying I've evaluated the evidence and decided that climate change isn't threatening. I'm saying that I read the 5th Synthesis Report, which I read because it was the report most-commonly cited by people claiming we face an existential risk, and found there is not one claim anywhere in it that humans face an existential risk from climate warming. I would say the most-alarming claim in the report is that crop yields are expected to be between 10% and 25% lower in 2100 than they would be without global warming. This is still less of an existential risk than population growth, which is expected to cause a slightly greater shortfall of food over that time period; and we have 80 years to plant more crops, eat fewer cows, or whatever.
You wrote, "What we are facing, and which is well described in the IPCC reports (more so in the latest one), is that there are big challenges ahead when it comes to crops and food security, fresh water supply, vector borne diseases, and mass displacement due to various factors." But the report I read suggests only that there are big challenges ahead when it comes to crops, as I noted above. For everything else, it just says that water supply will decline, diseases will increase, and displacement will increase. It doesn't say, nor give any evidence, that they'll decline or increase enough for us to worry about.
The burden of proof is not on me. The burden of proof is on the IPCC to show numeric evidence that the bad things they warn us about are quantitatively significant, and on everyone who cited this IPCC report to claim that humanity is in serious danger, to show something in the report that suggests that humanity is in serious danger. I'm not saying there is no danger; I'm saying that the source that's been cited to me as saying there is serious existential danger, doesn't say that.
(Halstead's report explicitly says, "my best guess estimate is that the indirect risk of existential catastrophe due to climate change is on the order of 1 in 100,000 [over all time, not just the next century], and I struggle to get the risk above 1 in 1,000." Dinosaur-killing-asteroid strike risk is about 1 / 50M per yr, or 1/500K per century.)
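(A quick sanity check on that per-century conversion, using the exact compounding formula rather than the linear shortcut:

```python
# Convert a 1-in-50-million annual asteroid-strike risk to a per-century risk.
per_year = 1 / 50_000_000
# Exact: probability of at least one strike across 100 independent years.
per_century = 1 - (1 - per_year) ** 100
# For risks this small, this is essentially 100x the annual risk, i.e. ~1/500,000.
```

The exact and linearized figures agree to about six significant digits at these magnitudes, so 1/500K per century is right.)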
You're right. Thanks! It's been so long since I've written conversions of English to predicate logic.