About this footnote:
============================
Carol Adams even informs us that:
Sebo and Singer flourish as academics in a white supremacist patriarchal society because others, including people of color and those who identify as women, are pushed down. (p. 135, emphasis added.)
Maybe treading on the oppressed is a crucial part of Singer’s daily writing routine, without which he would never have written a word? If there’s some other reason to believe this wild causal claim, we’re never told what it is.
=============================
Here's a potentially more charitable interpretation of this claim. Adams might not be claiming:
"Singer personally performs some act of oppression as part of his writing process."
Adams's causal model might instead be something like the following:
"Singer's ideas aren't unusually good; there are lots of other people, including people of color and those who identify as women, who have ideas that are as good or better. But those other people are being pushed down (by society in general, not by Singer personally) which leaves that position open for Singer. If people of color and those who identify as women weren't oppressed, then some of them would be able to outcompete Singer, leaving Singer to not flourish as much."
Of course that depends on whether everyone else is also evacuating. For instance, do we expect that if a tactical nuke is used in Ukraine, a significant fraction of the US population will be trying to evacuate? As has been mentioned before, there was not a significant percentage of the US population trying to evacuate even during the Cuban Missile Crisis, which was probably a much higher-risk and more salient situation than the one we face now.
One thing that would be really useful in terms of personal planning, and would maybe be a good idea to have a top-level post on, is something like:
What is P(I survive | I am in location X when a nuclear war breaks out)
for different values of X such as:
(A) a big NATO city like NYC
(B) a small town in the USA away from any nuclear targets
(C) somewhere outside the US/NATO but still in the northern hemisphere, like Mexico. (I chose Mexico because that's probably the easiest non-NATO country for Americans to get to)
(D) somewhere like Argentina or Australia, the places listed as being most likely to survive in a nuclear winter by the article here https://www.nature.com/articles/s43016-022-00573-0
(E) New Zealand, which pretty much everyone cites as the best place to go?
Probably E > D > C > B > A, but by how much?
As others have said, even (B) (with a suitcase full of food and water and a basement to hole up in) is probably enough to avoid getting blown up initially; the real question is what happens later. It could be that all the infrastructure just gets destroyed, there's no more food, and everyone starves to death.
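One way to make this concrete (the two-stage decomposition here is my framing, not something from the original discussion) is to factor the survival probability into surviving the initial strikes and then surviving the aftermath:

\[
P(\text{survive} \mid X) = P(\text{survive initial strikes} \mid X) \times P(\text{survive aftermath} \mid \text{survived strikes},\ X)
\]

For (B), the first factor is presumably close to 1, so the comparison between (B) and (D)/(E) comes down almost entirely to the second factor, which is exactly the infrastructure-and-starvation question.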
Of course another thing to take into account is that if I just decide to go somewhere temporarily and there's a war, I'll be stuck somewhere that's unfamiliar, where I may not speak the local language, and where I am not a citizen. Whether that is likely to affect my future prospects is unclear.
If it turns out that we'll be fine as long as we can survive the bombs and the fallout, that's one thing. But if we'll just end up starving to death unless we're in the Southern Hemisphere, then that is another thing.
(Does the possibility of nuclear EMP (electromagnetic pulse) attacks need to be factored in? I've heard claims like 'one nuke detonated in the middle of the USA at the right altitude would destroy almost all electronics in the USA', and maybe nearby countries would also be within the radius. If true, an EMP attack would likely happen in a nuclear war, and of course that would have drastic implications for survivability afterward. I don't know how reliable these claims are, though.)
Another important question is "how much warning will we have?" Even a day or two's worth of warning is enough to hop on the next flight south, but certainly there are some scenarios where we won't even have that much.
This was really helpful. I'm living in New York City and am also making the decision about when/whether to evacuate, so it was useful to see the thoughts of expert forecasters. I wouldn't consider myself an expert forecaster and don't really think I have much knowledge of nuclear issues, so here are a couple of other thoughts and questions:
- I'm a little surprised that P(London being attacked | nuclear conflict) seemed so low, since I would have expected it to be one of the highest-priority targets. What informed that, and would you expect somewhere like NYC to be higher or lower than London? (NYC does have a military base, Fort Hamilton (https://en.wikipedia.org/wiki/Fort_Hamilton), although I'm not sure how much that should update my probability.)
- It seems like a big contributor to the lower-than-expected risk is the fact that you could wait to evacuate if the situation looked like it was getting more serious, i.e. the "conditional on the above, informed/unbiased actors are not able to escape beforehand" factor. I don't have a car, so I would have to get on a bus or plane out, which might take up to a day. I'm not sure how much that affects the calculation, as I don't know what time frame they were thinking of. Were they assuming you can just leave immediately whenever you want?
- It sounds like it does make sense to monitor the situation closely and be ready to evacuate on short notice if the risk of escalation looks like it has increased (after all, that is what the calculation is based on). Does anyone have suggestions for what I should be following, or for the circumstances under which it would make sense to leave?
- Of course, another factor here is whether lots of other people would be trying to leave at the same time. That might make it harder to get out, especially if you were dependent on a bus, plane, Uber, etc.
- Another question is: where do you go? From NYC, I could go to {a suburb of NY / upstate NY / somewhere even more remote in the US like northern Maine / a non-NATO country}, each of which is progressively more costly but might offer progressively more safety. Are there reliable sources on which places would be safest?
For what it’s worth, while Facebook’s Forecast was met with some amount of skepticism, I wouldn’t say it was “dismissed” out of hand.
To clarify, when I made the comment about it being "dismissed", I wasn't thinking so much about media coverage as about individual Facebook users seeing prediction-app suggestions in their feeds. There are already a lot of unscientific, clickbait-y quizzes and games posted to Facebook, and I was concerned that users might lump this in with those if it were presented in a similar way.
Yeah, they certainly would be reluctant to do that. But given that they already do fact-checking, it doesn’t seem impossible.
I agree, and I definitely admit that the existence of the Facebook Forecast app is evidence against my view. I was more focused on the idea that if the recommender algorithm is based on prediction scores, that would mean that Facebook's choice of which questions to use would affect the recommendations across Facebook.
I'm not an expert on social media or journalism, so these are just some fairly low-confidence thoughts. This seems like a really interesting idea, but it seems very odd to think of it as a feature of Facebook (or another social media platform):
I wonder if it might make more sense to think of this as a feature on a website like FiveThirtyEight that already has an audience interested in probabilistic predictions and models. You could have a regular feature similar to The Riddler, but for forecasting questions: each column could pose several questions, readers could write in to make forecasts and explain their reasoning, and then you could publish the reasoning of the people who ended up most accurate, along with commentary.
You mention that:
Neither we nor they had any way of forecasting or quantifying the possible impact of [Extinction Rebellion]
and go on to present this as an example of the type of intervention that EA is likely to miss due to its lack of quantifiability.
One thing that would help us understand your point would be an answer to the following question:
If it's really not possible to make any kind of forecast about the impact of grassroots activism (or whatever intervention you prefer), then on what basis do you claim that supporting grassroots activism would be impactful? And how would you have any idea which groups or which forms of activism to fund, if there's no possible way of forecasting which ones will work?
I think the inferential gap here is that (we think) you are advocating an alternative way of justifying [the claim that a given intervention is impactful], other than the traditional "scientific" and "objective" tools (e.g. cost-benefit analysis, RCTs), but we're not really sure what you think that alternative justification would look like, or why it would push you towards grassroots activism.
I suspect that you might be using words like "scientific", "objective", and "rational" in a narrower sense than EAs do. For instance, EAs don't believe that "rationality" means "don't accept any idea that is not backed by clear scientific evidence", because we're aware that the evidence is often incomplete but a decision has to be made anyway. What a "rational" person would say in that situation is something more like "think about what we would expect to see in a world where the idea is true compared to what we would expect to see if it were false, see which is closer to what we actually see, and possibly also look at how similar things have turned out in the past."
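To put that procedure in more formal terms (my gloss, not something from the original exchange), it is roughly the likelihood-ratio form of Bayes' theorem: compare how probable the evidence is if the idea is true versus if it is false, and update the prior odds by that ratio:

\[
\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(E \mid H)}{P(E \mid \neg H)} \times \frac{P(H)}{P(\neg H)}
\]

Nothing in this requires an RCT; it only requires being able to say, even roughly, which observations would be more expected if the hypothesis H were true than if it were false.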
A more charitable interpretation of the authors' point might be something like the following:
(1) Since EAs look at quantitative factors like the expected number of lives saved by an intervention, they need to be able to quantify their uncertainty.
(2) It is harder to quantify the results of interventions that target large, interconnected systems than of interventions that target individuals. For instance, consider health-improving interventions. The intervention "give medication X to people who have condition Y" is easy to test with an RCT. However, the intervention "change the culture to make outdoor exercise seem more attractive" is much harder to test: it's harder to target cultural change at a particular area (and thus harder to run a well-controlled study), and the causal pathways are a lot more complex (e.g. it's not just that people get more exercise; it might also encourage changes in land-use patterns, which would affect traffic and pollution, etc.), so it would be harder to identify which effects were due to the change.
(3) Thus, EA approaches that focus on quantifying uncertainty are likely to miss interventions targeted at systems. Since most of our biggest problems are caused by large systems, EA will miss the highest-impact interventions.
As for the question of "what do the authors consider to be root causes," here's my reading of the article. Consider the case of factory farming. Probably all of us agree that the following are all necessary causes:
(1) There's lots of demand for meat.
(2) Factory farming is currently the technology that can produce meat most efficiently and cost-effectively.
(3) Producers of meat just care about production efficiency and cost-effectiveness, not animal suffering.
I suspect you and other EAs focus on item (2) when you are talking about "root causes." In this case, you are correct that creating cheap plant-based meat alternatives will solve (2). However, I suspect the authors of this article think of (3) as the root cause. They likely think that if meat producers cared more about animal suffering, then they would stop doing factory farming or invest in alternatives on their own, and philanthropists wouldn't need to support them. They write:
if all investment was directed in a responsible way towards plant-based alternatives, and towards safe AI, would we need philanthropy at all
Furthermore, they think that since the cause of (3) is a focus on cost-effectiveness (in the sense of minimizing cost per pound of meat produced), a focus on cost-effectiveness in philanthropy (in the sense of minimizing cost per life saved, or whatever) promotes more cost-effectiveness-focused thinking, which makes (3) worse. And they think lots of problems have something like (3) as a root cause. This is what they mean when they talk about the "values of the old system" in this quote:
By asking these questions, EA seems to unquestioningly replicate the values of the old system: efficiency and cost-effectiveness, growth/scale, linearity, science and objectivity, individualism, and decision-making by experts/elites.
As for the other quote you pulled out:
[W]ealthy EA donors [do] not [go] through a (potentially painful) personal development process to confront and come to terms with the origins of their wealth and privilege: the racial, class, and gender biases that are at the root of a productive system that has provided them with financial wealth, and their (often inadvertent) role in maintaining such systems of exploitation and oppression.
and the following discussion:
To be more concrete, I suspect what they're talking about is something like the following. Consider a potential philanthropist like Jeff Bezos; the authors likely believe that Amazon has harmed the world through its business practices. Let's say Jeff Bezos wanted to spend $10 billion of his wealth on philanthropy. There might be two ways of doing that:
(1) Donate $10 billion to worthy causes.
(2) Change Amazon's business practices such that he makes $10 billion less money, but Amazon has a more positive (or less negative) impact on the world.
My reading is that the authors believe (2) would be of higher value, but Bezos (and others like him) would be biased toward (1) for self-serving reasons: Bezos would get more direct credit for doing (1) than (2), and Bezos would be biased toward underestimating how bad Amazon's business practices are for the world.
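To put that comparison in rough symbolic form (a sketch of my own; the article doesn't quantify any of this): let v_1 be the value produced per dollar donated and v_2 the harm averted per dollar of profit forgone. Then

\[
\text{(2) beats (1)} \iff v_2 \times \$10\text{B} > v_1 \times \$10\text{B} \iff v_2 > v_1
\]

On this reading, the authors' claim is that v_2 > v_1 for donors like Bezos, but that the self-serving biases above lead such donors to act as if v_1 > v_2.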
---
Overall, though, I agree with you that if my interpretation accurately describes the authors' viewpoint, the article does not do a good job of arguing for it. But I'm not really sure about the relevance of your statement:
My impression is there's a worldview difference between people who think it's possible in principle to make decisions under uncertainty, and people who think it's not. I don't have much to say in defense of the former position except to vaguely gesture in the direction of Phil Tetlock and the proven track record of some people's ability to forecast uncertain outcomes.
Do you think that the article reflects a viewpoint that it's not possible to make decisions under uncertainty? I didn't get that from the article; one of their main points is that it's important to try things even if success is uncertain.