Dangers from our discoveries will pose the greatest long-term risk.
What we will discover cannot be known ahead of time, or even approximated.
Consider Andrew Wiles's proof of Fermat's Last Theorem: he came close to abandoning it, a lifelong obsession, on the very morning he solved it! That morning, Wiles's priors were the most accurate in the world. Not only that: since he was on the cusp of the solution, his priors should have been on the cusp of being correct. And yet...
"Wiles states that on the morning of 19 September 1994, he was on the verge of giving up and was almost resigned to accepting that he had failed... he was having a final look to try and understand the fundamental reasons for why his approach could not be made to work, when he had a sudden insight."
Just prior to his eureka moment, no one could have known better than Wiles about the probability of success, and yet the likelihood was opaque even to him.
It's not just that Wiles was off; he was wildly off. Right when he was closest, he was perhaps deepest in despair.
The reason for this is that a good prediction requires at least a decent model, which means knowing all the inputs. David Deutsch's example in The Beginning of Infinity is Russian roulette - we know all the inputs, and so our predictions make sense.
But with predicting discovery, we have to leave a gap in our model because we do not have all the inputs. We can't call this gap "uncertainty" because we can have a measure of uncertainty. With Russian roulette, we know that a pistol won't fire every time, and we can estimate this on the range as well as by measuring tolerances in manufacturing etc. But when something is unknowable, we have no idea how big the gap is. Wiles himself didn't know if he was moments away or many years off - he was utterly blind to the size of the gap in his own likelihood model.
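The contrast can be made concrete with a minimal sketch (the function name and numbers are illustrative, not from Deutsch's text): in Russian roulette every input to the model is known, so the likelihood is simple arithmetic, whereas there is no analogous calculation to write down for an undiscovered idea.

```python
# Russian roulette: all inputs to the model are known, so the
# likelihood is simple arithmetic over known quantities.
def p_fire(loaded_chambers: int, total_chambers: int = 6) -> float:
    """Probability the revolver fires on one trigger pull."""
    return loaded_chambers / total_chambers

p_fire(1)  # one round, six chambers -> 1/6
# There is no p_discovery(...) to write: the inputs (the missing
# ideas) are exactly what hasn't been discovered yet.
```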
This is as it must be with all human events, because even mundane events are driven by discovery. Whether I have coffee or tea this morning depends on how I come to understand breakfast and my palate, whether I found a new blend of coffee in the store or got curious about the process of tea cultivation. We could tally up all my previous mornings and call it a probability estimate, but an astrologer can create an estimate too. Both are equally meaningless, because neither can account for new discoveries.
I think the problem with longtermism is a conflation of uncertainty (which we can factor into our models) with unknowability (which we cannot).
We can predict the future state of the solar system simply based on measurements of past states and our understanding of gravity. Unless, of course, humans do something like shove asteroids out of Earth's path or adjust Mars's orbit to be more habitable. In that case, we wouldn't find evidence for such alterations in any of our prior measurements.
AGI is another example - it is very similar to Fermat's Last Theorem. How big is the gap in our current understanding? Are we nearly there like Wiles on that morning? Or are we staring down a massive gap in our understanding of information theory or epistemology or physics, or all three? Until we cross the gap, its size is unknowable.
How about Malthus? His model didn't account for the discovery of industrial fertilizer and crop breeding. How could he have known ahead of time the size of these contributions?
Two last parts of this. 1) It's meaningless to even speak of these gaps in terms of size. We can't quantify the mental leap required for an insight. Even the phrase "mental leap" is misleading; maybe "flash of insight" is better. We don't know much about this creative process, but it seems more akin to a change in perspective than to the distance of a jump. The "leap" phrasing contributes to the confusion, since it suggests a kind of labor theory of discovery - X amount of work will produce a discovery of profundity Y.
2) The difficulty of a problem, such as Fermat's Last Theorem or landing on the moon, is itself an attractor, making it almost paradoxically MORE likely to be solved. "We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard."
Any prediction about the future of human events (such as nuclear war or the discovery of AGI) must leave a gap for the role of human discovery, and we cannot know the size of that gap (size itself is meaningless in this context) prior to the discovery, not even close - so any such prediction is actually prophecy.
This was also anticipated by Popper's critique of historicism - "It is logically impossible to know the future course of history when that course depends in part on the future growth of scientific knowledge (which is unknowable in advance)."
Let's say I offered you a bet that we'll have a commercially viable nuclear fusion plant operating by 2030, and said you could take the bet in favour at 100-1 odds, or against, also at 100-1 odds.
(So in the first case you ~100x your money if it happens, in the second you ~100x your money if it doesn't.)
Would you be neutral between taking the 'yes' bet and the 'no' bet?
If not, I think it's because you know we can form roughly informed views and expectations about how likely various advances are, using all kinds of different methods, and need not be completely agnostic.
If you would be indifferent, I think your view is untenable and I would like to make a lot of bets about future technological/scientific progress with you.
I'd take the bet, but the feeling inclining me toward the affirmative says nothing about the actual state of the science and engineering. Even if I spend many hours researching the current state of fusion research, this will only affect the feeling I have in my mind. I can assign that feeling a probability, tell others that the feeling is "roughly informed," and enroll in Phil Tetlock's forecasting challenge. But none of this teaches me anything about the currently unknown discoveries that need to be made in order to bring about commercial fusion.
Imagine asking Andrew Wiles, on the morning of his discovery, whether he wanted to bet that a solution would be found that afternoon. Given his despair, he might have taken the 100-1 odds against. And this subjective sense of things would indeed have been well formed: he could have talked to us for hours about why his approach couldn't work. And we'd have come away convinced - it's hopeless. But that feeling of hopelessness, unlikelihood, despair - it had nothing to do with the math.
Estimating what remains to be discovered for a breakthrough is like trying to measure a gap but not knowing where to place the other end of the ruler.
It's hard to follow your argument, but how is any of this different from "someone thought X was very unlikely, but then X happened, so estimating the likelihood of future events is fundamentally impossible and pointless"?
That line of reasoning clearly doesn't work.
Things we assign low probability to in highly uncertain areas happen all the time — but that is exactly what we should expect and is consistent with our credences in many areas being informative and useful.
It's not that "it happened this one time with Wiles, where he really knew a topic and was also way off in his estimate, and so that's how it goes." It's that the Wiles example shows us that we are always in his shoes when contemplating the yet-to-be-discovered, we are completely in the dark. It's not that he didn't know, it's that he COULDN'T know, and neither could anyone else who hadn't made the discovery.
I'm curious to see if you have opinions on Arb's research into the track record of popular science fiction authors trying to predict the future.
I mistakenly included my response to another comment, I'm pasting it below.
Great point - Leo Szilard foresaw nuclear weapons and collaborated with Einstein to persuade FDR to start the Manhattan Project. Szilard would have done extremely well in a Tetlock scenario. However, this also conforms with my point - Szilard was able to successfully predict because he was privy to the relevant discoveries. The remainder of the task was largely engineering (again, not to belittle those discoveries). I think this also applies to superforecasters - they become like Szilard, learning of the relevant discoveries and then foreseeing the engineering steps.
Regarding sci-fi, Szilard appears to have been influenced by H.G. Wells's The World Set Free in 1913. But Wells was not just a writer - he was familiar with the state of atomic physics, and therefore many of the relevant discoveries - he even dedicated the book to an atomic scientist. And Wells's "atomic bombs" were lumps of a radioactive substance that released energy from a chain reaction, not a huge stretch from what was already known at the time. It's pretty incredible that Szilard is later credited with foreseeing nuclear chain reactions in 1933, shortly after the discovery of the neutron, and he was likely influenced by Wells. So Wells was a great thinker, and this nicely illustrates how knowledge grows: by excellent guesses refined through criticism and experiment. But I don't think we are seeing knowledge of discoveries before they are discovered.
Szilard's prediction in 1939 is very different from a similar prediction in 1839. Any statement about such weapons in 1839 would have been made, like Thomas Malthus's predictions, in a state of utter ignorance and unknowability about the eventual discoveries relevant to the forecast (nitrogen fixation and genetic modification of crops).
And the same holds for discoveries in the long-term future.
Objections to my post read to me like "but people have forecast things shortly before they appeared." True, but those forecasts already have much of the relevant discoveries factored in, even if largely invisible to non-experts.
Szilard must have seemed like a prophet to someone unfamiliar with the state of nuclear physics. You could understand a Tetlock who finds these seeming prophets among us and declares that some amount of prophecy is indeed possible. But to Wells, Szilard was just making a reasonable step from Wells's idea, which was itself a reasonable step from earlier discoveries.
As for science fiction writers in general, that's interesting. Obviously, selection effects will be strong (stories that turn out true will become famous), and good science fiction writers are more familiar with the state of the science than others. And finally, it's one thing to make a great guess about the future. It's entirely different to quantify the likelihood of this guess - I doubt even Jules Verne would try to put a number on the likelihood that submarines would eventually be developed.
I disagree with the implications of your example, because Wiles wasn't incentivized to be accurate, and wasn't particularly making an effort to give an accurate probability.
He was incentivized to decide whether to quit or to persevere (at the cost of other opportunities). For accuracy, all he needed was "likely enough to be worth it." And yet, at the moment when it should have been most evident what this likelihood was, he was so far off in his estimate that he almost quit.
Imagine if a good EA had stopped him in his moment of despair and encouraged him, with all the tools available, to create the most accurate estimate - I bet he'd still have considered quitting. He might even have become more convinced that it was hopeless.
This seems like it's pretty weak evidence given that he did in fact continue.
I think there are some straightforward counterexamples here:
If predictions of anything with human involvement are nonsensical, then superforecasters shouldn't exist, humans should learn to talk at wildly differing times, and Mendeleev wasn't doing meaningful inference - he just got really lucky. So I think your claim is far too strong.
All that said, I think that predicting if, and especially when, events that have never previously happened will occur is usually very difficult and merits a lot of humility and uncertainty.
I also think we agree on a weaker version of your claim, which is that EA underestimates the value of data it hasn't yet seen (post here).
I'll have to look at Tetlock again - there's a difference between predicting what will be determined to be the cause of Arafat's death (historical fact-collecting) and predicting how new discoveries will affect future politics. Nonetheless, I wouldn't be surprised if some people are better than others at predicting future events in human affairs. An example would be predicting that Moore's Law holds next year. In such a case, one could understand the engineering necessary to improve computer chips, perhaps knowing that a necessary component will halve in price next year based on new supplies being uncovered in some mine. This is knowledge of slight modifications of current understanding (basically, engineering vs. basic science research). It's certainly important and impressive, but it's refining existing knowledge rather than making new discoveries. Though I do recognize this response reads like me moving the goalposts...
Nice point about human development... I'm not sure how it relates. It seems to me this is biology playing out at a predictable pace. I'd bet that the elements of language development that are not dependent on biology vary greatly in their timelines, and the regularity that this research is discovering is almost purely biological. If we had the technology to do so, we could alter this biological development, and suddenly the old rules about milestones would fail. Put another way - reproducible experiments in psychology tell us about physiology of the brain, but nothing about minds, because mental phenomena are not predictable.
The periodic table is a perfect example of what I'm talking about - Mendeleev discovered the periodicity, and then was able to predict features of the natural world (that certain chemical properties would conform to this theory). So periodicity was the discovery, and fitting in the elements just conformed to the original discovery.
Here's another way to put my argument - imagine if every person were given a Honda Civic at age 16. You could imagine that most people would drive Honda Civics. An alien observer might think "humans are pre-programmed to choose Honda Civics." But in fact, we are free to choose any car we want; it's just really handy to keep driving the car we were given. Similarly in the real world - there are commonalities and propensities that superforecasters can pick up on, but that doesn't mean they can't be overwritten if someone has a mind to do so.
Great points though, I've got some thinking to do.
Yep, I think this is my difficulty with your viewpoint. You argue that there's no way to predict future human discoveries, and if I give you counterexamples your response seems to be 'that's not what I mean by discovery'. I'm not convinced the 'discovery-like' concept you're trying to identify and make claims about is coherent.
Maybe a better example here would be the theory of relativity and the subsequent invention of nuclear weapons. I'm not a physicist, but I would guess the scientific breakthrough that led to nuclear weapons would have been almost impossible to predict unless you were Einstein or Einstein-adjacent.
I agree we should be very scared of these sorts of breakthroughs, and the good news is many EAs agree with you! See Nick Bostrom's Vulnerable World Hypothesis for example. You don't need to argue against our ability to predict if/when all future discoveries will occur to make this case.
I disagree that the unknowns cannot be reasoned about.
There are known unknowns and unknown unknowns, and we can quantify that with "uncertainty".
You can say: "here's this thing I know exists, but I have no measure of it. I estimate it at x".
You can also quantify "unknown unknowns". You can say "there are things that I don't know, and I'm not even aware of them". You can make estimates about this as well.
You can go even further. When considering your model, you can have uncertainty about the accuracy of your model. You can quantify your uncertainty about your model itself.
Your idea of "unknowability" is simply wrong. (I think you're quite confused about how to reason under uncertainty and would benefit from reading about judgment under uncertainty. There's a book of the same name, but there are also many useful posts about it on LessWrong.)
Toby Ord does most of this in his estimates of existential catastrophe in The Precipice.
Making an estimate about something you're unaware of is like guessing the likelihood of the discovery of nuclear energy in 1850.
I can put a number on the likelihood of discovering something totally novel, but applying a number doesn't mean it's meaningful. A psychic could make quantified guesses and tell us about the factors involved in that assessment, but that doesn't make it meaningful.
This argument feels overly clever.
I'm saying the opposite - you can't rank the difficulty of unsolved problems if you don't know what's required to solve them. That's what yet-to-be-discovered means, you don't know the missing bit, so you can't compare.
"With Russian roulette, we know that a pistol won't fire every time, and we can estimate this on the range as well as by measuring tolerances in manufacturing etc. But when something is unknowable, we have no idea how big the gap is."
By this logic, it is just as impossible to say anything about the likelihood of nuclear war as about AGI. There have been nuclear tests and cold-war near-misses before, but so far there has never been a total war between two nuclear-armed nations. The diplomatic and military situations that would arise in the lead-up to war have never been encountered before in history. So the probability of war is totally unknowable! Enemy planes are showing up on our radar -- should we put the military on alert, or should we put banana plantations on alert? Who knows; we can't assign any probability!
And yet, two weeks ago you reasonably wrote that "the danger of nuclear war is greater than it has ever been". I think the arguments in that post show that it is possible to reason and make tradeoffs, even about subjects where we are dealing with lots of uncertainty / "unknowability".
A Thanksgiving turkey has an excellent model predicting that the farmer wants it to be safe and happy. But an explanation of Thanksgiving traditions tells us a lot more about the risk of slaughter than the number of days the turkey has been fed and protected.
With nuclear war, we have explanations for why nuclear exchange is possible, including as an outcome of a conflict.
Just like with the turkey, we should pay attention to the explanation, not just try to make predictions based on past data.
With all of this, probability terminology is baked into the language, and it's hard to speak without incorporating it. As for the previous post: it was co-authored, and while I wanted to remove that phrase, concessions were made.
I agree with you, but once again I don't see the difference between the case of the turkey and nuclear war, versus the case of longtermism or AGI. "With nuclear war, we have explanations for why nuclear exchange is possible, including as an outcome of a conflict." Just the same with AGI -- we have explanations for why AGI seems possible, we have some evidence from scaling laws that describe how AI systems get better when given more resources, and ideas about what might motivate people to create more and more powerful AI systems, and why that might be dangerous, etc.
I am not an academically trained philosopher (rather, an engineer!), so I'm not sure what's the best way to talk about probability and make it clear what kind of uncertainty we're talking about. But in all cases, it seems that we should basically use a mixture of empirical evidence based on past experience (where available), and first-principles reasoning about what might be possible in the future. With some things -- mathematical theorems are a great example -- evidence might be hard to come by, so it might be very difficult to predict with precision. But it doesn't seem like we are in fundamentally different, "unknowable" terrain -- it's more uncertain than nuclear war risk, which in turn is more uncertain than forecasting things like housing prices or wheat harvests, which in turn is more uncertain than forecasting that the sun will rise tomorrow. They all seem like part of the same spectrum, and the long-term future of civilization seems important enough that it's worth thinking about even amid high uncertainty.
How about this: let's split future events into two groups. 1) Events that are not influenced by people and 2) Events that are influenced by people.
In 1, we can create predictive models, use probability, even calculate uncertainty. All the standard rules apply, Bayesian and otherwise.
In 2, we can still create predictive models, but they'll be nonsensical. That's because we cannot know how knowledge creation will affect 2. We don't even need any fancy reasoning; it's already implied in the definition of terms like "knowledge creation" and "discovery." You can't discover something before you discover it, before it's created.
So, up until recently, the bodies of the solar system fell into category 1. We can predict their positions many years hence, as long as people don't get involved. However, once we are capable, there's no way now to know what we'll do with the planets and asteroids in the future. Maybe we'll find use for some mineral found predominantly in some asteroids, or maybe we'll use a planet to block heat from the sun as it expands, or maybe we'll detect some other risk/benefit and make changes accordingly.
This is an extreme example, but it applies across the board. Any time human knowledge creation impacts a system, there's no way to model that impact before the knowledge is created.
Therefore, longtermism hinges on the idea that we have some idea of how to impact the long-term future. But even more than the solar system example, that future will be overwhelmingly dominated by new knowledge, and hence unknowable to us today, unable to be anticipated.
Sure, we can guess, and in the case of known future threats like nuclear war, we should guess and should try to ameliorate risk. But those problems apply to the very near future as well, they are problems facing us today (that's why we know a fair bit about them). We shouldn't waste effort trying to calculate the risk because we can't do that for items in group 2. Instead, we know from our best explanations that nuclear war is a risk.
In this way the threat of nuclear war is like the turkey's - if the turkey even hears a rumor about Thanksgiving traditions, should it sit down and try to update its priors? Or take the entirely plausible theory seriously, try to test it (have other turkeys been slaughtered? are there any turkeys over a year old?), and decide whether it's worth taking some precautions?
There are a few distinctions that might help with your update:
It seems like your use of the solar system example allows you to assume the first two distinctions apply to knowledge of the solar system. I'm not sure a physicist would agree with your choice of example, but I'm OK with it.
Human reasoning is defeasible, but until an observation provides an update, we do not necessarily consider the unknown beyond making passive observations of the real world.
From my limited understanding of the philosophy behind classic EA epistemics, believing only what you know leads to refusing new observations that would update your closed world. Thus the emphasis, most of the time, on incomplete epistemic confidence. So the thinking goes: always holding out that you might be wrong ensures that you're not closed-minded.
When running predictions, until someone provides a specific new item for a list of alternative outcomes (e.g., a new s-risk), the given list is all that is considered. Probabilities are divided among its alternatives when those alternatives are outcomes. The only exhaustive list of alternatives is one that includes a contradictory option, such as:

- A
- B
- C
- not A and not B and not C

and that covers all the possibilities. The interesting options are implicit in that last "not A and not B and not C". This is not a big deal, since it's usually the positive statements of options (A, B, or C) that are of interest.
So what's a discovery? It seems like, in your model, it's an alternative that is not listed directly. For example, given:

- future 1
- future 2
- future 3: not future 1 and not future 2

An unexpected discovery belongs to future 3. All we know about it is that it is not future 1 and not future 2. One way to reframe your line of thought would be to ask:
how can we weight future 3?
A concrete example of discoveries of road surfacing strategies:
That actually looks ridiculous. How do we know that there's a 1% chance that we discover something better than roads?
In a longtermist framework, reasoning by analogy, let's consider some futures (this example is fiction, not what I believe):
Future 4 has a probability of 55%. But future 4 is simply the unknowable future. What in heck is going on here?
If I understand what you're trying to say, it's that futures like future 4 in that example cannot be assigned a probability or risk. Furthermore, given that future 4 is a mutually exclusive alternative to futures 1, 2, and 3, those futures cannot be assigned a probability either.
Have I made an error in reasoning or did I misunderstand you?
Beautiful! We can't determine the probability of "something we haven't thought of" simply as "1 - all the things we've thought of".
Basically, predictions about the future are fine as long as they include the caveat "unless we figure out something else." That caveat can't be ascribed a meaningful probability, because we can't know discoveries before we discover them; we can't know things before we know them.
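The move being criticized can be sketched with made-up numbers (the weights are purely illustrative): the catch-all future gets whatever probability mass is left over, which is mechanical arithmetic about the weights we chose, not knowledge about the unknown.

```python
# Naive residual assignment: "something we haven't thought of"
# is treated as whatever mass the named futures leave behind.
named_futures = {"future 1": 0.25, "future 2": 0.20}  # illustrative weights
residual = 1.0 - sum(named_futures.values())  # the catch-all future
# residual is about 0.55, but nothing about the unknown grounds
# that number; it only restates the weights chosen above.
```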
Well, my basic opinion about forecasting is that probabilities don't inform the person receiving the forecast. Before you commit to weighting possible outcomes, you commit to at least two mutually exclusive futures, X and not-X. So what you supply is a limitation on possible outcomes: either X or not-X. At best, you're aware of specific, mutually exclusive alternative futures. Then you can limit what not-X means to something specific, for example Y. Now you can say, "The future will contain X or Y." That sort of analysis is enabled by your causal model; as your causal model improves, it becomes easier to supply a list of alternative future outcomes.
However, the future is not a game of chance, and there's no useful interpretation to supply meaningful weights to the future prediction of any specific outcome, unless the outcomes belong to a game of chance, where you're predicting rolls of a fair die, choice of a hand from a deck of cards, etc.
What's worse, that does not limit your feelings about what probabilities apply. Those feelings can seem real and meaningful because they let you talk about lists of outcomes and which you think are more credible.
As a forecaster, I might supply outcomes in a forecast that I consider less credible along with those that I consider more credible. But if you ask me which options I consider credible, I might offer a subset of the list. So in that way weights can seem valuable, because they let you distinguish which outcomes you think are more credible and which you can rule out. But the weights also obscure that information, because they can scale that credibility in confusing ways.
For example, I believe in outcomes A or B, but I offer A at 30%, B at 30%, C at 20%, D at 10%, and E at 10%. Have I communicated what I intended with my weights, namely, that A and B are credible, that C is somewhat credible, but D and E are not? Maybe I could adjust A and B to 40% and 40%, but now I'm fiddling with the likelihoods of C, D, and E, when all I really mean to communicate is that I like A or B as outcomes and C as an alternate. My probabilities communicate more and differently than I intend. I could make it clear with A and B each at 48% or something, but really now I'm trying to pretend I know what the chances of C, D, and E are, when all I really know about them is that my causal model doesn't support their production much. I could go back and quantify that somehow, but information with which to do that is not available, so I have to pretend confidence in some estimation of the outcomes C, D, and E. My information is not useless, but it's not relevant to weighting all possible outcomes against each other. If I'm forced to provide weights for all the listed outcomes, then I'm forced to figure out how to communicate my analysis in terms of weights so that the audience for my forecast understands what I intend to mean.
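A toy sketch of that problem (the outcome names and weights are illustrative): the forecaster's real information is a coarse tier ranking, but the weight format demands precise-looking numbers, and adjusting two of them silently rescales the implied odds of the rest.

```python
# What the forecaster actually believes: a coarse tier ranking.
tiers = {"A": "credible", "B": "credible", "C": "alternate",
         "D": "ruled out", "E": "ruled out"}

# What the forecast format demands: weights summing to 1.
weights = {"A": 0.30, "B": 0.30, "C": 0.20, "D": 0.10, "E": 0.10}

# Raising A and B to 0.40 each leaves only 0.20 for C, D, and E
# combined -- their implied odds change even though the forecaster's
# view of them hasn't.
bumped = {**weights, "A": 0.40, "B": 0.40}
leftover = 1.0 - (bumped["A"] + bumped["B"])  # mass left for C, D, E
```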
In general, analyzing the causal models that determine possible futures is a distinct activity from weighting those futures. The valuable information is in the causal models and in the selection of futures based on those models. The extra information on epistemic confidence is not useful and pretends to more information than a forecaster likely has. I would go as far as offering two tiers of selections, just to qualify what I think my causal model implies:
"A or B, and if not those, then C, but not D or E".
Actually, I think someone reading my forecast with weights will just leave with that kind of information anyway. If they try to mathematically apply the weights I chose to communicate my tiers of selections, then they will be led astray, expecting precision when there wasn't any. They would do better to get details of the causal models involved and determine whether those have any merit, particularly in cases of:
so basically in all cases. What might distinguish superforecasters is not their grasp of probability or their ability to update Bayesian priors or whatever, but rather the applicability of the causal models they develop, and what those models emphasize as causes and consequences.
That's the background of my thinking, now here's how I think it relates to what you're saying:
If discoveries influence future outcomes in unknown ways, and your information is insufficient to predict all outcomes, then your causal model makes predictions that belong under an open-world assumption. You are less useful as a predictor of outcomes and more useful as a supplier of possible outcomes. If we are both forecasting, and I supply outcomes A and B, you might supply outcomes C and D; someone else might supply E, F, and G; yet another person might supply H. Our forecasts run from A to H so far, and they are not exhaustive. As forecasters, our job becomes to create lists of plausible futures, not to select from predetermined lists.
I think this is appropriate to conditions where development of knowledge or inventions is a human choice. Any forecast will depend not only on what is plausible under some causal model, but also on what future people want to explore and how they explore it. Forecasts in that scenario can influence the future, so better that they supply options rather than weight them.
I love it. Creating lists of plausible outcomes is very valuable; we can leave aside the idea of assigning probabilities.
An update that came from the discussion:
Let's split future events into two groups: 1) events that are not influenced by people, and 2) events that are influenced by people.
In 1, we can create predictive models, use probability, even calculate uncertainty. All the standard rules apply, Bayesian and otherwise.
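For group 1 the standard machinery genuinely works. As a minimal sketch, take the Russian roulette example from the post, assuming a hypothetical six-chamber revolver that is re-spun before each pull: every input to the model is known, so the probabilities are exactly computable.

```python
from math import comb

# Six-chamber revolver, re-spun before each pull: all inputs to the model
# are known, so probabilities (and their spread) are exactly computable.
P_FIRE = 1 / 6

def p_fires_at_least_once(pulls):
    """Probability the gun fires at least once in `pulls` independent pulls."""
    return 1 - (1 - P_FIRE) ** pulls

def p_fires_exactly(k, pulls):
    """Binomial probability of exactly k firings in `pulls` pulls."""
    return comb(pulls, k) * P_FIRE**k * (1 - P_FIRE) ** (pulls - k)

print(round(p_fires_at_least_once(6), 3))  # 0.665
print(round(p_fires_exactly(0, 6), 3))     # 0.335
```

Nothing comparable exists for group 2: there is no list of chambers to enumerate, because the relevant "input" is knowledge that hasn't been created yet.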
In 2, we can still create predictive models, but they'll be nonsensical, because we cannot know how knowledge creation will affect those events. We don't even need any fancy reasoning; it's already implied in the definitions of terms like knowledge creation and discovery. You can't discover something before you discover it, before the knowledge is created.
So, up until recently, the bodies of the solar system fell into category 1. We can predict their positions many years hence, as long as people don't get involved. However, now that we are capable of getting involved, there's no way to know what we'll do with the planets and asteroids in the future. Maybe we'll find a use for some mineral found predominantly in certain asteroids, or maybe we'll use a planet to block heat from the sun as it expands, or maybe we'll detect some other risk or benefit and make changes accordingly. In fact, this last type of change will predominate the farther we get into the future.
This is an extreme example, but it applies across the board. Any time human knowledge creation impacts a system, there's no way to model that impact before the knowledge is created.
Therefore, longtermism hinges on the idea that we have some idea of how to impact the long-term future. But even more than in the solar system example, that future will be overwhelmingly dominated by new knowledge, and hence unknowable to us today, impossible to anticipate.
Sure, we can guess, and in the case of known future threats like nuclear war, we should guess and should try to ameliorate risk. But those problems apply to the very near future as well; they are problems facing us today (which is why we know a fair bit about them). We shouldn't waste effort trying to calculate the risk, because we can't do that for items in group 2. Instead, we know from our best explanations that nuclear war is a risk.
In this way the threat of nuclear war is like the turkey's predicament: if the turkey even hears a rumor about Thanksgiving traditions, should it sit down and try to update its priors? Or should it take the entirely plausible theory seriously, try to test it (have other turkeys been slaughtered? are there any turkeys over a year old?), and decide whether it's worth taking some precautions?
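The turkey's mistake can be sketched with made-up numbers. Assume a naive model in which each fed day is evidence that the farmer is benevolent (all the probabilities below are hypothetical, chosen only to illustrate the shape of the failure):

```python
# A sketch of the turkey's inference problem, with made-up numbers:
# under a naive model, every fed day raises confidence that the farmer
# is benevolent, so confidence peaks on the eve of Thanksgiving.
def updated_confidence(prior, days_fed, p_fed_if_benevolent=0.99, p_fed_if_not=0.90):
    """Naive Bayesian update on `days_fed` consecutive feedings."""
    likelihood_ratio = (p_fed_if_benevolent / p_fed_if_not) ** days_fed
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

confidence = updated_confidence(prior=0.5, days_fed=300)
print(round(confidence, 4))  # near-certainty, the day before the axe falls
```

The update machinery is working exactly as designed; the failure is that the rival theory (Thanksgiving exists) never enters the model, which is why testing the theory directly beats refining the weights.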
You might be interested in these posts by Nate Soares:
They explore how we should act given that some things "cannot be known ahead of time, not even approximated."
Thank you!
I have that audiobook by Deutsch and I never thought of making that connection to longtermism.
I am reminded of the idea of the ruliad, where a species' perspective is just a slice of the rulial space of all possible kinds of physics.
I am also reminded of the AI that Columbia Engineering researchers built, which found new variables to predict phenomena we already have formulas for. The AI's predictions using those variables worked well, and it was not clear to the researchers what all the variables were.
The unpredictability of discoveries and the two things I mentioned seem to share a theme: our vantage point is just the tip of an iceberg.
I don't think that future knowledge discoveries being unknowable refutes longtermism. On the contrary, because future knowledge could lead to bad things, such unknowability makes it more important to be prepared for various possibilities. Even if for every bad use of technology or knowledge there is a good use that can exactly counteract it, the bad uses can be employed or discovered first, and the good uses may not have enough time to counteract the bad outcomes.
But such work would undoubtedly produce unanticipated and destabilizing discoveries. You can't grow knowledge in foreseeable ways, with only foreseeable consequences.