
sella

284 karma

Bio

I currently lead Google's Flood Forecasting Initiative alongside several other humanitarian and climate-related efforts. I'm also the co-founder and head of research at Probably Good, the founder and head of the board of Effective Altruism Israel, a lecturer in Applied Ethics and Information Security at Tel Aviv University, a venture partner at Firstime, a VC firm that invests in climate startups, and a member of the advisory board of ALLFED.

Comments (18)

Hey Oscar, I am indeed reading this! (albeit a bit late)

First, I really appreciate you looking into this and writing it up. I'm excited to see people explore more cause areas and give rough estimates of how promising they are.

There are quite a few details in the cost-effectiveness analysis that I think are inaccurate, and I've mentioned a few at the bottom of this comment in case they're of interest. However, I still think this is a good and valuable shallow investigation. If there weren't discrepancies between the conclusions of a 50-hour investigation and 6 years of work by different people, that would be quite surprising (and Google would have wasted quite a lot of money on external analyses, RCTs, etc.).

Details aside, I broadly agree with what you wrote in this comment. There's a big difference between statements about how promising a field is as a whole and claims that there are no uniquely impactful opportunities within it - and I think we have such an opportunity. I think flooding is an incredibly important problem and there are many organizations I would love to see investing more in it, but I would not make the case that it should be a top-priority focus area for GiveWell / OpenPhil / CE.

It’s also worth noting explicitly that even if I had more meaningful disagreements with the conclusions of this investigation I’d still be very appreciative of it. These questions are complex, multiple perspectives are incredibly valuable, and even well-informed intelligent people can disagree.

Finally, just in case it is of interest and useful, I’ll note the largest drivers of where my estimates diverge from those in the report:

  • Total impact - EM-DAT is definitely a candidate for the most comprehensive source of recorded flood (and other natural disaster) deaths, but it's not a great source for an estimate of total deaths. This is partially due to undercounting of deaths (other estimates that use EM-DAT alongside additional sources find higher numbers[1],[2]), but more significantly because these counts only include immediate deaths (e.g. from trauma) and not longer-term health effects[3]. Tracking the relationship between floods and all-cause mortality (or other health indicators)[4] leads to vastly larger numbers. Both methodologies are probably far from the true figures, but in opposite directions. The World Bank report you linked to estimates that warning systems could save 23,000 lives per year; even if that overestimates the number of lives that plan would save, I think it is a reasonable lower bound on the total number of lives lost annually. My current best estimate of fatalities is about an order of magnitude higher than yours.

  • Comparison of top interventions to all harms vs. all investment - we've both already commented on how unique opportunities might be more promising than the field as a whole. I want to make a related but broader point here, because I'm worried the methodology will systematically bias us in favor of existing cause areas over new ones. In parts, your analysis asks how much funding it would take to address all flood harms globally. You then compare the cost-effectiveness of that to GiveDirectly, one of the top interventions globally, which invests immensely in targeting[5] to maximize its cost-effectiveness. Due to diminishing returns, completely solving a problem globally would be very cost-ineffective in almost any domain (for example, trying to solve all poverty via cash transfers would also be far less cost-effective than GiveDirectly's current margin). In floods specifically, this can make a difference of orders of magnitude: on average, the cost-benefit ratio of early warning systems has been estimated at 1:9[6], but in highly affected low-income countries it can rise to the hundreds[7] (see the toy sketch after this list). Also, I won't name names, but most $100M+ programs in this space that I've seen were never actually completed or used - so average cost-effectiveness numbers here are very far from the effectiveness of a well-functioning organization doing good work. Concretely, I think your statements might be more than an order of magnitude off if we're considering investing in promising projects to mitigate flood harms (focusing on severely affected low- and middle-income countries, early warning, and organizations/solutions with a strong track record).

  • Note: my views are disproportionately influenced by work on early warning systems - specifically monitoring, forecasting, and alerting, the areas I've been most involved in - which are only part of what you aimed to review; your report touched on other areas of flood management as well.
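To make the average-vs-marginal point concrete, here's a minimal toy sketch in Python. The diminishing-returns curve, the BCR of ~300 at the margin, and the $2B programme size are entirely hypothetical numbers chosen for illustration - they are not estimates from my work or from the report - but they show how a field's average cost-effectiveness can sit an order of magnitude or more below the cost-effectiveness of its best-targeted opportunities.

```python
# Toy model, purely illustrative: all numbers are hypothetical assumptions,
# not estimates from this report or from my own work.

def marginal_bcr(spend_millions: float) -> float:
    """Hypothetical diminishing-returns curve: the first dollars fund early
    warning in severely affected low-income regions (BCR in the hundreds),
    later dollars fund progressively less cost-effective work."""
    return 300 / (1 + spend_millions / 20)

def average_bcr(total_spend_millions: float, steps: int = 10_000) -> float:
    """Average benefit-cost ratio over a programme of the given total size,
    approximated by summing the marginal curve in small spending increments."""
    step = total_spend_millions / steps
    total_benefit = sum(marginal_bcr(i * step) * step for i in range(steps))
    return total_benefit / total_spend_millions

print(f"BCR of the first, best-targeted projects: {marginal_bcr(0):.0f}")               # ~300
print(f"Average BCR of a $2B 'address everything' programme: {average_bcr(2000):.0f}")  # ~14
```

Comparing the second number to a top charity's margin would make the whole field look unpromising, even though its best opportunities (the first number) are highly competitive - which is exactly the bias I'm worried about.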

You might also be interested in a report from 2016 by CEA, which includes a review of the cost-effectiveness of flood management. I think it misses some nuances of its own, but it provides another interesting perspective.

I have A LOT more things to say about the empirical statements, framework for evaluation, and assumptions that went into this - happy to chat if you’re interested.


  1. A digitized global flood inventory (1998–2008): compilation and preliminary results

  2. The Human Impact of Floods: a Historical Review of Events 1980-2009 and Systematic Literature Review

  3. Health Risks of Flood Disasters

  4. Health Effects of Flooding in Rural Bangladesh

  5. Study: AI targeting helped reach more of the poorest people in Togo

  6. Global Commission on Adaptation's Adapt Now report

  7. Background Paper on Assessment of the Economics of Early Warning Systems for Disaster Risk Reduction

Thank you for flagging this! I've now corrected the links.

Hi Brian, thanks for the feedback. While we do hope to add other indicators of credibility, we don't plan on featuring Effective Altruism Israel specifically on the website. Though both Omer and I are heavily involved in EA Israel, and though it seems likely that Probably Good would not exist had EA Israel not existed, it is a separate org and effort from EA Israel. It is "supported by EA Israel" in the sense that I think members of EA Israel are supportive of the project (and I hope members of many other communities are too), but it is not "supported by EA Israel" in the sense that we receive any funding or resources from the group. I mention this mainly because our mission and intended audience are global, and connecting the website or organization with EA Israel may lead to confusion or discourage those who are not from Israel from engaging with us.

Hi Peter, thanks for these suggestions!

I hadn't seen the doc you linked to before, and it is indeed a good starting point. We're actively working on our internal M&E strategy at the moment, so this is particularly useful to us right now.

I agree with the other suggestions, and those are already planned. Their full implementation might take a while, but I expect us to have some updates related to this soon. 

Thanks for this detailed feedback - I'm happy to hear you think the article would be useful to people in situations you've been in. All three of the points you raised seem reasonable - some touch on nuances that I already have in my full notes but that were dropped for brevity, while others are things we hadn't yet heard from the people we interviewed (including those acknowledged in the article and several others who preferred to remain anonymous). Based on consultation with others I'll look into incorporating some of these nuances, though I apologize in advance that not all of them will make it in.

We’re definitely taking into account the different comments and upvotes on this post. We appreciate people upvoting the views they’d like to support - this is indeed a quick and efficient way for us to aggregate feedback.

We've received recommendations from founders of existing EA organizations against opening public polls about the organization's name, and we trust those recommendations, so we'll probably avoid that route. But we will likely look into ways to test whether a "less controversial" name has positive or negative effects on the reaction of someone hearing it for the first time.

Hi Manuel, thanks for this comment. I think I agree with all your considerations listed here. I want to share some thoughts about this, but as you’ve mentioned - this is one of our open questions and so I don’t feel confident about either direction here.

First, we have indeed been providing general career coaching to people in Israel for several years now, so in a sense we are implementing your recommended path and are now moving on to the next phase of that plan. That being said, there are still reasons to continue to narrow our scope even at this stage.

Second, you mention partnering with experts in the various cause areas to ensure accurate content - I completely agree, and wouldn't dream of providing concrete career advice independently in fields I don't have experience in. For the content we are writing right now, we require interviewing at least 7 experts in a field to provide high-confidence advice, and at least 3 experts even for articles we mark as low confidence (which we explicitly warn people to treat with caution). So it's really important to me to clarify that none of the concrete career-specific advice we provide will be based exclusively on our own opinions or knowledge - even within the fields we do have experience in.

Finally, I think at least some of the issues you've (justifiably) raised are mitigated by the way we aim to provide this advice. As opposed to existing materials, which more confidently aim to provide answers to career-related questions, we place a larger emphasis on providing the tools for making those decisions in your own context. As community organizers, one of the things that pushed us to start this effort is the feeling that many people who don't happen to be from the (very few) countries EA orgs focus on have very little guidance and few resources, while more and more is invested in optimizing the careers of those within those countries. We believe that doing highly focused work on Israel would not serve the community as well as providing guidance on what needs to be explored and figured out to apply EA career advice to your own context. As such, we want to provide recommendations on how to check for opportunities within the scope that's relevant to you (e.g. country or skillset), rather than aiming to provide all the answers as final conclusions on our website. This applies most to our career guide, but also to specific career path profiles - where we want to first lay out the main considerations one should look into, so that we provide valuable preliminary guidance for a wide range of people rather than end-to-end analysis for fewer people.

The mitigations described above can be much better evaluated once we have some materials online, which will allow others to judge their implementation (and not only our aspirations). We plan on soliciting feedback from the community before we begin advocating for them in any meaningful way - hopefully that will help make these responses less abstract and still leave us time to collect feedback, consider it and try to optimize our scope and messaging.

Hi Jack, thanks for the great question. 

In general, I don't think there's one best approach. Where we want to be on the education/acceptance trade-off depends on the circumstances. It might be easiest to go over examples (including the ones you gave) and give my thoughts on how they're different.

First, I think the simplest case is the one you ended with. If someone doesn’t know what cause area they’re interested in and wants our help with cause prioritization, I think there aren’t many tradeoffs here - we’d strongly recommend relevant materials to allow them to make intelligent decisions on how to maximize their impact. 

Second, I want to address cases where someone is interested in cause areas that don't seem plausibly compatible with EA, broadly defined. In these cases we believe in tending towards the "educate" side of the spectrum (as you call it), though in our writing we still aim not to make it a prerequisite for engaging with our recommendations and advice. That being said, these nuances may be irrelevant in the short term (at least months, possibly more), since due to content prioritization we probably won't have any content for cause areas that are not firmly within EA.

In the case where the deliberation is between EA cause areas (as in your example), there are some nuances that will probably be evident in our content even from day one (though they may change over time). Our recommended process for choosing a career will involve engaging with important cause prioritization questions, including who deserves moral concern (e.g. those far from us geographically, non-human animals, and those in the long-term future). Within more specific content, e.g. specific career path profiles, we intend to refer to these considerations but not try to force people to engage with them. Taking your global health example: in a career path profile about development economics, we would highlight that one of the disadvantages of this path is that it is mainly promising from a near-term perspective and unclear from a long-term perspective, with links to relevant materials. That being said, someone who has decided they're interested in global health, doesn't follow our recommended process for choosing a career, and navigates directly to global health-related careers will primarily be reading content related to that cause area (and not material on whether it is the top cause area). Our approach to 1:1 consultation is similar - our top recommendation is for people to engage with relevant materials, but we are willing to assist with narrower questions if that's what they're interested in (though, much like the non-EA case, we expect demand to exceed our capacity for the foreseeable future, and may in practice prioritize those who are pursuing all avenues to increase their impact).

Hope this provides at least some clarity, and let me know if you have other questions.

I agree this is an important question that would be of value to other organizations as well. We've already consulted with 80K, CE, and AAC about it, but still feel this is an area where we have a lot more work to do. It isn't explicitly called out in our open questions doc, but when we talk about measuring and evaluating our counterfactual benefits and harms, this question has been top of mind for us.

The short version of our current thinking separates short-term measurement from long-term measurement. We expect that longer term this kind of evaluation will be easier - since we'll at least have career trajectories to evaluate. Counterfactual impact estimation is always challenging without an experimental setup, which is hard to do at scale, but I think 80K and OpenPhil have put out multiple surveys that try to extract estimates of counterfactual impact and do so reasonably well given the challenges, so we'll probably do something similar. Also, at that point we could compare our results to theirs, which could be a useful barometer. In the specific context of our effect on people taking existing priority paths, I think it'll be interesting to compare the chosen career paths of people who discovered 80K through our website with those of people who discovered 80K from other sources.

Our larger area of focus at the moment is how to evaluate the effect of our work in the short term, when we can't yet see our long-term effect on people's careers. We plan on measuring proxies, such as changes to people's values, beliefs, and plans. We expect whatever proxy we use in the short term to be very noisy and based on a small sample size, so we plan on relying heavily on qualitative methods. This is one of the reasons we reached out to many people who are experienced in this space (and we're incredibly grateful they agreed to help) - we think their intuition is an invaluable proxy for figuring out whether we're heading in the right direction.

This is an area that we believe is important and we still have a lot of uncertainty about, so additional advice from people with significant experience in this domain would be highly appreciated.
