Sarah Weiler

Research fellow (AI Governance) @ Global Policy Research Group
509 karma · Joined · Working (0-5 years) · Innsbruck, Austria
www.globalprg.org/aigovernanceprogram

Participation (4)

  • Completed the Introductory EA Virtual Program
  • Completed the In-Depth EA Virtual Program
  • Attended an EA Global conference
  • Attended more than three meetings with a local EA group

Sequences (1)

Wrapping my head around the nuclear risks cause area

Comments (43)

Thanks for explaining! In this case, I think I come away far less convinced by your conclusions (and the confidence of your language) than you seem to be. I (truly!) find what you did admirable given the resources you seem to have had at your disposal and the difficult data situation you faced. And I think many of the observations you describe (e.g., about how orgs responded to your call; about donor incentives) are insightful and well worth discussing. But I also think the output would be significantly more valuable had you added more nuance and caution to your findings, as well as a more detailed description of the underlying data & analysis methods.

But, as said before, I still appreciate the work you did and also the honesty in your answer here!

I have major reservations about your conclusion (in part because I embrace anti-fanaticism, in part because I see big challenges and some downsides to outsourcing moral reflection and decision-making to another person). However, I really appreciate how well you outlined the problem and I also appreciate that you don't shy away from proposing a possible solution, even while retaining a good measure of epistemic humility. Thanks for posting!

Thanks a lot for writing this up and sharing your evaluations and thinking! 

I think there is lots of value in on-the-ground investigations and am glad for the data you collected to shine more light on the Cameroonian experience. That said, reading the post, I wasn't quite sure what to make of some of your claims and takeaways, and I'm a little concerned that your conclusions may be misrepresenting part of the situation. Could you share a bit more about your methodology for evaluating the cost-effectiveness of different organisations in Cameroon? What questions did these orgs answer when they entered your competition? What metrics and data sources did you rely on when evaluating their claims and efforts through your own research?

Most centrally, I would be interested to know: 1) Did you find no evidence of effects, or did you find evidence of no effect[1]?; and 2) Which time horizon did you look at when measuring effects, and are you concerned that a limited time horizon might miss essential outcomes?

If you find the time, I'd be super grateful for some added information and your thoughts on the above! 

  1. ^

    The two are not necessarily the same, and there's a danger of misrepresentation and misleading policy advice when equating them uncritically. This has been discussed in the field of evidence-based health and medicine, but I think it also applies to observational studies on development interventions like the ones you analyse: Ranganathan, Pramesh & Buyse (2015), "Common pitfalls in statistical analysis: 'No evidence of effect' versus 'evidence of no effect'"; Vounzoulaki (2020), "'No evidence of effect' versus 'evidence of no effect': how do they differ?"; Tarnow-Mordi & Healy (1999), "Distinguishing between 'no evidence of effect' and 'evidence of no effect' in randomised controlled trials and other comparisons".
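
    To make the distinction concrete, here is a minimal sketch with invented numbers (not based on your data): a small, noisy study and a large, precise study can both report a null result, but only the latter supports "evidence of no effect".

    ```python
    # Two hypothetical studies of the same intervention, both with point
    # estimates near zero. All numbers are invented for illustration.
    Z = 1.96  # normal quantile for a 95% confidence interval

    studies = {
        "Study A (small, noisy)":   {"effect": 0.10, "se": 1.05},
        "Study B (large, precise)": {"effect": 0.00, "se": 0.05},
    }

    for name, s in studies.items():
        lo, hi = s["effect"] - Z * s["se"], s["effect"] + Z * s["se"]
        print(f"{name}: 95% CI [{lo:+.2f}, {hi:+.2f}]")

    # Study A: wide CI spanning zero  -> "no evidence of effect" (inconclusive)
    # Study B: tight CI around zero   -> "evidence of no effect" (informative null)
    ```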

Hi! Thanks for posting this. I think international relations/politics and the different approaches to it receive too little attention in EA discussions and thinking, and I'm happy to see contributions on the topic here on the forum! :))

However, your outline seems a bit overly reductive to me: within international relations theory and discussions, the realist/idealist dichotomy has probably never existed in a pure form, much less so since the end of the Second World War. From the second half of the twentieth century until roughly today, the following categories have been more reflective of how scholars and thinkers in the space classify themselves and their colleagues (see disciplinary overviews here, here, and here):

  • Liberalism (or Liberal Institutionalism)
  • Neo-realism (and many variants thereof, such as defensive and offensive realism or neo-classical realism)
  • Constructivism (again, with various versions)
  • English School of IR
  • Critical theories (Marxist IR, Feminist IR, Post-colonial IR, Postmodern IR, etc.)

Also, I think it's useful to point out that the contrast between "values" and "interests" can be quite misleading: "interests" cannot be defined without some notion of "the good", so pursuing "national interests" always requires some moral choice by the country in question (or its leaders). In addition, people who advocate for a foreign policy that promotes human rights protection and/or other moral values abroad often hold the empirical conviction that this "idealist" promotion of values is in the national interest of their home country (because they think a world without extreme moral infringements is more conducive to overall peace, lower rates of transnational crime and terrorism, etc.). All of this makes me rather frustrated (and sometimes annoyed) when I hear people use labels such as "realism" or "idealism" in ways that suggest the former is more empirically grounded or value-free (of course, this is not your fault as the author of this piece, since you didn't invent these terms and are simply describing how others in this space use them).

Thanks for organising this and sharing the programme here! Is there any reason you did not put the price in the description posted here? I think that this is - at least for someone like myself - a significant decision criterion for a potential applicant, and it is a bit strange/inconvenient to learn about it only after filling in the entire application form.

(For other readers: the normal price is $550 for the entire programme, and there is an option to apply for financial support within the application form)

I don't draw precisely the same conclusions as you (I'm somewhat less reluctant to entertain strategies that aim to introduce untested systemic changes on a relatively large scale), but I really appreciate the clarity and humility/transparency of your comment, and I think you outline some considerations that are super relevant to the topic of discussion. Thanks for writing this up :)!

First two points sound reasonable (and helpfully clarifying) to me!

I suppose it is the nature of being scope-sensitive and prioritarian though that something being very important and neglected and moderately tractable (like x-risk work) isn't always enough for it to be the 'best'

I share the guess that scope sensitivity and prioritarianism could be relevant here, as you clearly (I think) endorse these more strongly and more consistently than I do; but having thought about it for only 5-10 minutes, I'm not sure I can pinpoint exactly how these notions play into our intuitions and views on the topic - maybe it's something about me more readily dismissing the [(super-high payoff of larger future)*(super-low probability of affecting whether there is a larger future) = (there is good reason to take this action)] calculation/conclusion?
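
To make that calculation explicit, here's a toy version of what I have in mind (all numbers invented purely for illustration):

$$ \underbrace{10^{35}}_{\text{payoff of a larger future}} \times \underbrace{10^{-25}}_{\text{prob. my action affects it}} = 10^{10} \gg \text{expected value of most ordinary actions} $$

The expected-value logic still recommends the action despite the minuscule probability, and it may be exactly this fanaticism-flavoured conclusion that I reject more readily than you do.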

That said, I fully agree that "something being very important and neglected and moderately tractable (like x-risk work) isn't always enough for it to be the 'best' ". To figure out which option is best, we'd need to somehow compare their respective scores on importance, neglectedness, and tractability... I'm not sure actually figuring that out is possible in practice, but I think it's fair to challenge the claim that "action X is best because it is very important and neglected and moderately tractable" regardless. In spite of that, I continue to feel relatively confident in claiming that efforts to reduce x-risks are better (more desirable) than efforts to increase the probable size of the future, because the former is an unstable precondition for the latter (and because I strongly doubt the tractability and am at least confused about the desirability of the latter).

Another, perhaps tortured, analogy: you have founded a company, and could spend all your time trying to avoid going bankrupt and mitigating risks, but maybe some employee should spend some fraction of their time thinking about best-case scenarios and how you could massively expand and improve the company 5 years down the line if everything else falls into place nicely.

I think my stance on this example would depend on the present state of the company. If the company is in really dire straits, I'm resource-constrained, and there are more things that need fixing now than I feel able to easily handle, I would seriously question whether one of my employees should spend time thinking about making best-case future scenarios the best they can be[1]. I would question this even more strongly if I thought that the world and my company (if it survives) will change so drastically in the next 5 years that the employee in question has very little chance of imagining and planning for that eventuality.

(I also notice while writing that a part of my disagreement here is motivated by values rather than logic/empirics: part of my brain just rejects the objective of massively expanding and improving a company/situation that is already perfectly acceptable and satisfying. I don't know if I endorse this intuition for states of the world (I do endorse it pretty strongly for private life choices), but can imagine that the intuitive preference for satisficing informs/shapes/directs my thinking on the topic at least a bit - something for myself to think about more, since this may or may not be a concerning bias.)

I expected this would not be a take particularly to your liking, but your pushback is stronger than I thought, this is useful to hear. [...] As a process note, I think these discussions are a lot easier and better to have when we are (I think) both confident the other person is well-meaning and thoughtful and altruistic, I think otherwise it would be a lot easier to dismiss prematurely ideas I disagree with or find uncomfortable.

+100 :)

  1. ^

    (This is not to say that it might not make sense for one or a few individuals to think about the company's mid- to long-term success; I imagine that type of resource allocation will be quite sensible in most cases, because it's not sustainable to keep the company in day-to-day survival mode forever; but I think that's different from asking these individuals to paint a best-case future to be prepared to make a good outcome even better.)

Thanks for writing this up, Oscar! I largely disagree with the (admittedly tentative) conclusions, and am not sure how apt I find the NIMBY analogy. But even so, I found the ideas in the post helpfully thought-provoking, especially given that I would probably fall into the cosmic NIMBY category as you describe it. 

First, on the implications you list. I think I would be quite concerned if some of your implications were adopted by many longtermists (who would otherwise try to do good differently):

Support pro-expansion space exploration policies and laws

Even accepting the moral case for cosmic YIMBYism (that aiming for a large future is morally warranted), it seems far from clear to me that support for pro-expansion space exploration policies would actually improve expected wellbeing for the current and future world. Such policies & laws could share many of the downsides that colonialism and expansionism have had in the past:

  • Exploitation of humans & the environment for the sake of funding and otherwise enabling these explorations; 
  • Planning problems: Colonial-esque megaprojects like massive space exploration likely constitute a bigger task than human planners can reasonably take on, leading to large chances of catastrophic errors in planning & execution (as evidenced by past experiences with colonialism and similarly grand but elite-driven endeavours)
  • Power dynamics: Colonial-esque megaprojects like massive space exploration seem prone to reinforcing the prestige, status, and power for those people who are capable of and willing to support these grand endeavours, who - when looking at historical colonial-esque megaprojects - do not have a strong track record of being the type of people well-suited to moral leadership and welfare-enhancing actions (you do acknowledge this when you talk about ruthless expansionists and Molochian futures, but I think it warrants more concern and worry than you grant);
  • (Exploitation of alien species (if there happened to be any, which maybe is unlikely? I have zero knowledge about debates on this)).

This could mean that it is more neglected and hence especially valuable for longtermists to focus on making the future large conditional on there being no existential catastrophe, compared to focusing on reducing the chance of an existential catastrophe.

It seems misguided and, to me, dangerous to go from "extinction risk is not the most neglected thing" to "we can assume there will be no extinction and should take actions conditional on humans not going extinct". My views on this are to some extent dependent on empirical beliefs which you might disagree with (curious to hear your response there!): I think humanity's chances of averting global catastrophe in the next few decades are far from comfortably high, and I think the path from global catastrophe to existential peril is largely unpredictable, but it doesn't seem completely inconceivable that such a path will be taken. I think there are far too few earnest, well-considered, and persistent efforts to reduce global catastrophic risks at present. Given all that, I'd be quite distraught to hear that a substantial fraction (or even a few members) of those people concerned about the future would decide to switch from reducing x-risk (or global catastrophic risk) to speculatively working on "increasing the size of the possible future", on the assumption that there will be no extinction-level event to preempt that future in the first place.

--- 

On the analogy itself: I think it doesn't resonate super strongly (though it does resonate a bit) with me, because my definition of, and frustration with, local NIMBYs differs from what you describe in the post.

In my reading, NIMBYism is objectionable primarily because it is a short-sighted and unconstructive attitude that obstructs efforts to combat problems that affect all of us; the thing that bugs me most about NIMBYs is not their lack of selflessness but their failure to understand that everyone, including themselves, would benefit from the actions they are trying to block. For example, NIMBYs objecting to high-rise apartment buildings seem to me to be mistaken in their belief that such buildings would decrease their welfare: the lack of these apartment buildings will make it harder for many people to find housing, which exacerbates problems of homelessness and local poverty, which decreases living standards for almost everyone living in that area (incl. those who have the comfort of a spacious family house, unless they are amongst the minority who enjoy or don't mind living in the midst of preventable poverty and, possibly, heightened crime). It is a stubborn blindness to arguments of that kind and an unwillingness to consider common, longer-term needs over short-term, narrowly construed self-interests that form the core characteristic of local NIMBYs in my mind. 

The situation seems to be different for the cosmic NIMBYs you describe. I might well be working with an unrepresentative sample, but most of the people I know/have read who consciously reject cosmic YIMBYism do so not primarily on grounds of narrow self-interest but for moral reasons (population ethics, non-consequentialist ethics, etc) or empirical reasons (incredibly low tractability of today's efforts to influence the specifics about far-future worlds; fixing present/near-future concerns as the best means to increase wellbeing overall, including in the far future). I would be surprised if local NIMBYs were motivated by similar concerns, and I might actually shift my assessment of local NIMBYism if it turned out that they are. 

New Update (as of 2024-03-27): This comment, with its very clear example to get to the bottom of our disagreement, has been extremely helpful in pushing me to reconsider some of the claims I make in the post. I have somewhat updated my views over the last few days (see the section on "the empirical problem" in the Appendix I added today), and this comment has been influential in helping me do that. Gave it a Delta for that reason; thanks Jeff!

While I now more explicitly acknowledge and agree that, when measured in terms of counterfactual impact, some actions can have hundreds of times more impact than others, I retain a sense of unease when adopting this framing:

When evaluating impact differently (e.g., through Shapley-value-like attribution of "shares of impact", or through a collective rationality mindset (see comments here and here for what I mean by collective rationality mindset)), it seems less clear that the larger donor is 100x more impactful than the smaller donor. One way of reasoning about this: probably - necessarily? - the person donating $100,000 needed more preceding actions leading up to the situation where she is able and willing to donate that much money, and there will probably - necessarily? - be more subsequent actions needed to make the money count, to ensure that it has positive consequences. The impact of the $100,000 donation will then have to be apportioned between many more actors and actions; it is not clear whether the larger donor will appear vastly more impactful when considered from this different perspective/measurement strategy...
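
To illustrate the Shapley-style intuition, here is a minimal sketch with invented payoffs (a toy model, not a claim about real donation data): if a donation only produces impact together with the enabling work of others, attribution splits the credit rather than assigning all of it to the donor.

```python
from itertools import permutations

# Toy cooperative game: a large donor "D" and an enabler "E" whose
# supporting work is needed before the donation achieves anything.
# The payoff numbers are invented purely for illustration.
def v(coalition: frozenset) -> float:
    return 100.0 if coalition == frozenset({"D", "E"}) else 0.0

players = ["D", "E"]

def shapley(player: str) -> float:
    """Average marginal contribution of `player` over all join orders."""
    orders = list(permutations(players))
    total = 0.0
    for order in orders:
        before = frozenset(order[: order.index(player)])
        total += v(before | {player}) - v(before)
    return total / len(orders)

for p in players:
    print(p, shapley(p))  # D -> 50.0, E -> 50.0: the impact is shared
```

Under a purely counterfactual accounting, by contrast, each of D and E would be credited with the full 100 (remove either one and the impact vanishes), which is part of why the two framings can pull in such different directions.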

You can shake your head and claim - rightly, I believe - that this is irrelevant for deciding whether donating $100,000 or donating $1,000 is better. Yes, for my decision as an individual, calculating the possible impact of my actions by assessing the likely counterfactual consequences resulting directly from the action will sometimes be the most sensible thing to do, and I’m glad I’ve come to realise that explicitly in response to your comment.

But I believe that recognising and taking seriously this fact - that, viewed from this other perspective, my choice to donate $100,000 does not make me individually responsible for 100x more impact than the donor of $1,000 - can be relevant for decisions in two ways:

  1. It prevents me from discounting and devaluing all the other actors that contribute vital inputs (even if they are "easily replaceable" as individuals)
  2. It encourages me to take actions that may facilitate, enable, or support large counterfactual impact by other people. This perspective also encourages me to consider actions that may have a large counterfactual impact themselves, but in more indirect and harder-to-observe ways (even if I appear easily replaceable in theory, it's unclear whether I will be replaced in practice, so the counterfactual impact seems extremely hard to determine; what is very clear is that by performing a relevant supportive action, I will be contributing something vital to the eventual impact).

If you find the time to come back to this so many days after the initial post, I'd be curious to hear what you think about these (still somewhat confused?) considerations :)

Thanks a lot for that comment, Dennis. You might not believe it (judging by your comment towards the end), but I did read the full thing and am glad you wrote it all up!

I come away with the following conclusions:

  1. It is true that we often credit individuals with impacts that were in fact the results of contributions from many people, often over long times. 
  2. However, there are still cases where individuals can have outsize impact compared to the counterfactual case where they do not exist. 
  3. It is not easy to say in advance which choices or which individuals will have these outsize influences …
  4. … but there are some choices which seem to greatly increase the chance of being impactful. 

Put this way, it leaves me very little to object to. Thanks for providing that summary of your takeaways; I think it will be quite helpful to me as I continue to puzzle out my updated beliefs in response to all the comments the essay has gotten so far (see statements of confusion here and here).

For example, anyone who thinks that being a great teacher cannot be a super-impactful role is just wrong. But if you do a very simplistic analysis, you could conclude that. It’s only when you follow through all the complex chain of influences that the teacher has on the pupils, and that the pupils have on others, and so on, that you see the potential impact.

That's interesting. I think I hadn't really considered the possibility of putting really good teachers (and similar people-serving professions) into the super-high-impact category, and then my reaction was something like "If obviously essential and super important roles like teachers and nurses are not amongst the roles a given theory considers relevant and worth pursuing, then that's suspicious and gives me reason to doubt the theory." I now think that maybe I was premature in assuming that these roles would necessarily lie outside the super-high-impact category?

The real question, even if not always posed very precisely, is: for individuals who, for whatever reason, find themselves in a particular situation, are there choices or actions that might make them 100x more impactful? [...] And yet, it feels like there are choices we make which can greatly increase or decrease the odds that we can make a positive and even an outsize contribution. And I'm not convinced by (what I understand to be) your position that just doing good without thinking too much about potential impact is the best strategy.

I think the sentiment behind those words is one that I wrongly neglected in my post. For practical purposes, I think I agree that it can be useful and warranted to take seriously the possibility that some actions will have much higher counterfactual impact than others. I continue to believe that there are downsides or perils to the counterfactual perspective, and that it misses some relevant features of the world; but I can now also see more clearly that there are significant upsides to that same perspective and that it can often be a powerful tool for making the world better (if used in a nuanced way). Again, I haven't settled on a neat stance to bring my competing thoughts together here, but I feel like some of your comments above will get me closer to that goal of conceptual clarification - thanks for that!
