Michael Townsend

Researcher @ Giving What We Can
1713 karma · Working (0-5 years) · Seaforth NSW 2092, Australia

Bio

Researcher at Giving What We Can.

Posts: 10

Comments: 64

Topic contributions: 1

I don't have any particularly strong views, and would be interested in what others think.

Broadly, I agree that more specificity/transparency is helpful, though I'm not convinced it isn't also worth asking, at some stage in the application, an open-ended question like "Why are you interested in the role?". I'm not sure I can explain/defend my intuitions much right now, but I'd like to think more on it when I get around to writing some reflections on the Research Communicator hiring process.

I'm not sure I follow what you mean by transparency in this context. Do you mean being more transparent about what exactly we were looking for? In our case we asked for <100 words on "Why are you interested in this role?" and "Briefly, what is your experience with effective giving and/or effective altruism?", and we were just interested in seeing whether applicants' interest/experience aligned with the skills, traits and experience we listed in the job description.

In the hiring round I mentioned, we did time submissions for the work tests, and at least my impression is that this worked out fairly well. Having a timed component for the initial application is also possible, but might require more of an 'honour code' system, as setting up a process that allows for verification of the time spent is a pretty big investment for the first stage of an application.

As a former applicant for many EA org roles, I strongly agree! I recall spending on average 2-8 times longer on some initial applications than was estimated by many job ads. 

As someone who just helped drive a hiring process for Giving What We Can (for a Research Communicator role) I feel a bit daft having experienced it on the other side, but not having learned from it. I/we did not do a good enough job here. We had a few initial questions that we estimated would take ~20-60 minutes, and in retrospect I now imagine many candidates would have spent much longer than this (I know I would have). 

Over the coming month or so I'm hoping to draft a post with reflections on what we learned from this, and how we would do better next time (inspired by Aaron Gertler's 2020 post on hiring a copyeditor for CEA). I'll be sure to include this comment and its suggestion (having a link at the end of the application form where people can report how long it actually took to fill the form in) in that post. 

Thanks for this post! I appreciate your writing, and also appreciated the images you included -- they made it more fun to read.

I wrote some feedback privately which the author thought would be good to share publicly, so this is a lightly edited version of that feedback:

  • The post was quite long, taking 10-15 minutes or so for me to read. I think this was because you wrote this in quite a careful way, including caveats, counterarguments, etc., and I'm not sure all this was necessary.
  • I think a shorter (~1/3rd the length) post which just explained what convenience meant using a few examples could have been better. In particular, it would be useful to emphasise examples where existing terminology fails but where 'convenience' succeeds.
  • On that last point: I can't immediately think of an example where 'convenience' would be helpful (except for times I would already use the word 'convenience') and so I don't feel sold on the term. I also think we should have a very high bar for adding jargon. In the examples you gave, I think I generally either: prefer the original sentence you included, would already use the term convenient (if it came to mind), or think there's a better way of conveying the same meaning using a different term.
  • To combine the few comments above: I think it's difficult to decide from the armchair which jargon will be helpful. So rather than a carefully made argument for the uptake of a particular term, I think it's better to just define the term and put it out there (with a few examples) -- if it's useful enough, people will use it; if not, it probably won't catch on (and I don't think a careful argument would have made the difference).
  • I found the convenience accounting part quite confusing. Specifically, I don't get how the concept of convenience helps do this kind of accounting, and (as I think you seem to believe based on your "accountant foolishly trying to list...") I don't think this accounting is actually helpful for most decisions.
  • I really like the general concept of trying to keep track of what is and is not convenient to you, your organisations, others around you, etc. I appreciated you giving such honest examples of your own conveniences. I'm not sure you needed the term to do this, but I do think it's good practice.

Thanks for conducting this impact assessment, for sharing this draft with us before publishing it, and for your help with GWWC's own impact evaluation! A few high-level comments (as a researcher at GWWC):

  • First, just reiterating that we appreciate others checking our assumptions and sharing their views on them.
  • As other commenters have discussed, we don't think it makes sense to only account for our influence on longtermist donations. We'd like to do a better job explaining our views here, which we see as similar to Open Philanthropy's worldview diversification.
  • I also appreciate your acknowledgement of the limitations of your approach (some of which are similar to ours) -- in particular, that you have not modelled our potential indirect benefits, which may well be the driver of our impact.

Regarding the difference between how you have modelled the value of the GWWC Pledge versus how we did so:

  • As a quick summary for others: the key difference is that GWWC's impact evaluation worked out the value of the Pledge by looking at GWWC Pledgers as an overall cohort, and at the average amount donated by Pledgers each year over their Pledge tenure. The analysis in this evaluation (explained in the post) looks at Pledgers as individuals, models them each in turn, and takes the average of those models. (Please correct me if I'm wrong here!)
  • Consequently, this approach uses a 'richer' set of information, though I also see it as requiring more assumptions (that the rules for extrapolating each individual Pledger's giving are in fact correct), whereas our approach uses less information but only assumes that -- on average -- past data will be indicative of future data. I'd be interested in whether you think this is a fair summary.
  • I have some intuitions that GWWC's approach is more robust, but that this one -- if done well -- could potentially be more valid. They're just intuitions though, and I haven't thought too deeply about it. 
  • I find it interesting that this approach appears to lead to more optimistic conclusions about GWWC's impact (despite the way it 'bounds' how any individual Pledger's giving can be extrapolated over time).

Thanks again for your work!

Hi Michael, thank you for the response

No problem!

Regarding:

Also, wouldn't the above 'x-risk discount rate' be 2% rather than 0.2%?

There was a typo in my answer before: 1 - ((1 - 1/6)^(1/100)) = 0.0018, which is ~0.2% (not the 0.02 I originally wrote), and is a fair amount smaller than the discount rate we actually used (3.5%). Still, if you assigned a greater probability of existential risk this century than Ord does, you could end up with a (potentially much) higher discount rate. Alternatively, even with a high existential risk estimate, if you thought we were going to find more and more cost-effective giving opportunities as time goes on, then at least for the purpose of our impact evaluation, these effects could cancel out.
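For concreteness, the conversion from a per-century risk to a constant annual discount rate can be sketched in a couple of lines of Python (using Ord's 1-in-6 figure discussed above):

```python
# Convert a 1-in-6 existential risk over 100 years into the equivalent
# constant annual discount (hazard) rate.
century_risk = 1 / 6
annual_rate = 1 - (1 - century_risk) ** (1 / 100)
print(round(annual_rate, 4))  # ~0.0018, i.e. roughly 0.2% per year
```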

I think if we spent more time trying to come to an all-things-considered view on this topic, we'd still be left with considerable uncertainty, so I think it was the right call for us to acknowledge that uncertainty and take the pragmatic approach of deferring to the Green Book.

In terms of the general tension between potentially high x-risk and the chance of transformative AI, I can only speak personally (not on behalf of GWWC). It's something on my mind, but it's unclear to me what exactly the tension is. I still think it's great to move money to effective charities across a range of impactful causes, and I'm excited about building a culture of giving significantly and effectively throughout one's life (i.e., via the Pledge). I don't think GWWC should pivot and become specifically focused on one cause (e.g., AI) and otherwise I'm not sure exactly what the potential for transformative AI should imply for GWWC. 

Hi Phib, Michael from the GWWC Research team here! In our latest impact evaluation we did need to consider how to think about future donations. We explain how we did this in the appendix "Our approach to discount rates". Essentially, it's a really complex topic, and you're right that existential risk plays into it (we note this as one of the key considerations). If you discount the future just based on Ord's existential risk estimates, based on some quick-maths, the 1 in 6 chance over 100 years should discount each year by 0.2% (1 - ((1 - 1/6)^(1/100)) = 0.02). 

Yet there are many other considerations that also weigh into this, at least from GWWC's perspective. Most significantly is how we should expect the cost-effectiveness of charities to change over time.

We chose to use a discount rate of 3.5% for our best-guess estimates (and 5% for our conservative estimates), based on the recommendation in the UK government's Green Book. We explain why we made that decision in our report; it was largely motivated by our framework of being useful/transparent/justifiable over being academically correct and thorough.
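To illustrate what a constant discount rate does in practice, here's a minimal sketch with a hypothetical donation stream (this is not GWWC's actual model, just the standard present-value calculation the rates would feed into):

```python
def present_value(donations, rate):
    """Discount a stream of annual donations (year 1 onwards) back to today."""
    return sum(d / (1 + rate) ** t for t, d in enumerate(donations, start=1))

stream = [1000] * 10  # hypothetical: $1,000/year for the next 10 years

best_guess = present_value(stream, 0.035)    # 3.5% best-guess rate
conservative = present_value(stream, 0.05)   # 5% conservative rate
print(round(best_guess), round(conservative))
```

The conservative rate shaves several hundred dollars off the present value of even this short stream, which is why the choice of rate matters for multi-decade Pledge estimates.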

If you're interested in this topic, and in how to think about discount rates in general, you may find Founders Pledge's report on investing to give an interesting read.

Hi Joel — great questions! 

(1) Are non-reporters counted as giving $0?
Yes — at least for recorded donations (i.e., the donations that are within our database). For example, in cell C41 of our working sheet, we provide the average recorded donations of a GWWC Pledger in 2022-USD ($4,132), and this average assumes non-reporters are giving $0. Similarly, in our "pledge statistics" sheet, which provides the average amount we record being given per Pledger per cohort and by year, we also assumed non-reporters are giving $0.

(2) Does this mean we are underestimating the amount given by Pledgers?
Only for recorded donations — we also tried to account for donations that were made but are not in our records. We discuss this more here, but in sum: for our best-guess estimates, we estimated that our records only account for 79% of all pledge donations, and therefore we need to make an upwards adjustment of 1.27 to go from recorded donations to all donations made. We discuss how we arrived at this estimate pretty extensively in our appendix (our methodology here is similar to how we analysed our counterfactual influence). For our conservative estimates, we did not make any recording adjustments, and we think this does underestimate the amount given by Pledgers.
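As a tiny arithmetic check of the adjustment mentioned above:

```python
recorded_share = 0.79            # best-guess share of all pledge donations in our records
adjustment = 1 / recorded_share  # multiplier from recorded donations to all donations
print(round(adjustment, 2))      # -> 1.27
```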

(3) How did we handle nonresponse bias and could we handle it better?
When estimating our counterfactual influence, we explicitly accounted for nonresponse bias. To do so, we treated respondents and nonrespondents separately, assuming our influence on nonrespondents was a fraction of our influence on respondents, across all surveys:

  • 50% for our best-guess estimates.
  • 25% for our conservative estimates.

We actually did consider adjusting this fraction depending on the survey we were looking at, and in our appendix we explain why we chose not to in each case. Could we handle this better? Definitely! I really appreciate your suggestions here — we explicitly outline handling nonresponse bias as one of the ways we would like to improve future evaluations.
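One simple way to read the adjustment described above is as a weighted blend. This is a sketch with hypothetical numbers (the 30% influence and 60% response rate are made up for illustration, not GWWC's actual figures):

```python
def adjusted_influence(respondent_influence, respondent_share, nonrespondent_fraction):
    """Blend full influence on respondents with a discounted influence on nonrespondents."""
    nonrespondent_influence = respondent_influence * nonrespondent_fraction
    return (respondent_share * respondent_influence
            + (1 - respondent_share) * nonrespondent_influence)

# Hypothetical inputs: 30% counterfactual influence among respondents,
# 60% survey response rate.
best_guess = adjusted_influence(0.30, 0.60, 0.50)     # 50% fraction (best guess)
conservative = adjusted_influence(0.30, 0.60, 0.25)   # 25% fraction (conservative)
print(round(best_guess, 2), round(conservative, 2))   # -> 0.24 0.21
```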

(4) Could we incorporate population base rates of giving when considering our counterfactual influence?
I'd love to hear more about this suggestion; it's not obvious to me how we could do this. For example, one interpretation here would be to look at how much Pledgers are giving compared to the population base rate. Presumably, we'd find they are giving more. But I'm not sure how we could use that to inform our counterfactual influence, because there are at least two competing explanations for why they are giving more:

  • One explanation is that we are simply causing them to give more (so we should increase our estimated counterfactual influence).
  • Another is that we are just selecting for people who are already giving a lot more than the average population (in which case, we shouldn't increase our estimated counterfactual influence).

But perhaps I'm missing the mark here, and this kind of reasoning/analysis is not really what you were thinking of. As I said, would love to hear more on this idea.

(Also, appreciate your kind words on the thoroughness/robustness)

Thanks :)!

In the donations by cause area breakdown, you can see which causes pledge and non-pledge donors give to. This could potentially inform a multiplier for particular cause areas. I don't think we considered doing this, and I'm not sure it's something we'll do in future, but we'd be happy to see others do this using the information we provide.

Unfortunately, we don't have a strong sense of how we influenced which causes donors gave to. The only thing that comes to mind is our question: "Please list your best guess of up to three organisations you likely would *not* have donated to if Giving What We Can, or its donation platform, did not exist (i.e. donations where you think GWWC has affected your decision)", the results of which you can find on page 19 of our survey documentation here. Only an extremely small sample of non-pledge donors responded to the question, though. Getting a better sense of our influence here, as well as generally analysing trends in which cause areas our donors give to, is something we'd like to explore in our future impact evaluations.
