Jamie_Harris

Managing Director @ Leaf
2393 karma · Joined Sep 2017 · Working (6-15 years) · Archway, London N19, UK

Bio


Jamie is Managing Director at Leaf, an independent nonprofit that supports exceptional teenagers to explore how they can best save lives, help others, or change the course of history.

Jamie previously worked as a teacher, as a researcher at the think tank Sentience Institute, and as co-founder and researcher at Animal Advocacy Careers, which helps people to maximise their positive impact for animals.

Comments
313

Topic contributions
1

I tried doing this a while back. Some things I think I worried about at the time:

(1) Disheartening people excessively by sending them scores that seem very low/brutal, especially if you use an unusual scoring methodology.

(2) Causing yourself more time costs than it seems like at first, because (a) you find yourself needing to add caveats or manually hide some info to make it less disheartening to people, and (b) people ask you follow-up questions.

(3) Exposing yourself to some sort of unknown legal risk by saying something not-legally-defensible about the candidate or your decision-making.

(1) turned out to be pretty justified, I think: e.g. at least one person expressed upset/dissatisfaction at being told this info.

(2) definitely happened too, although maybe not all that many hours in the grand scheme of things.

(3) We didn't get sued, but who knows how much we increased the risk.

Thank you!

I understand the reasons for ranking relative to a given cost-effectiveness bar (or by a given cost-effectiveness metric). That provides more information than constraining the ranking to a numerical list so I appreciate that.

Btw, if you had 5-10 mins spare I think it'd be really helpful to add explanation notes to the cells in the top row of the spreadsheet. E.g. I don't know what "MEV" stands for, or what the "cost-effectiveness" or "cause no." columns are referring to. (Currently these things mean that I probably won't share the spreadsheet with people because I'd need to do a lot of explaining or caveating to them, whereas I'd be more likely to share it if it was more self-explanatory.)

Thanks! When you say "median in quality" what's the dataset/category that you're referring to? Is it e.g. the 3 ranked lists I referred to, or something like "anyone who gives this a go privately"?

Very helpful comment, thank you for taking the time to write out this reply and sharing useful reflections and resources!

First, I think precise ranking of "cause areas" is nearly impossible, as it's hard to meaningfully calculate the "cost-effectiveness" of a cause; you can only accurately calculate the cost-effectiveness of an intervention which specifically targets that cause. So if you did want a meaningful rank, you at least need to have an intervention which has probably already been tried and researched to some degree at least.

There's a lot going on here. I suspect I'm more optimistic than you that sharing uncertain but specific rankings is helpful for clarifying views and making progress. I agree in principle that what we want to do is evaluate specific actions ("interventions"), but I still think you can rank expected cost-effectiveness at a slightly more zoomed-out level, as long as you are comparing across roughly similar levels of abstraction. (Implicitly, you're evaluating the average intervention in that category, rather than a single intervention.) Given these things, I don't think I endorse the view that "you at least need to have an intervention which has probably already been tried and researched to some degree at least."

Secondly, I think having public specific rankings has the potential to be both meaningless and reputationally dangerous.

I agree with the reputational risks and the potential for people to misunderstand your claim or think that it's more confident than it is, etc. I somewhat suspect that this will be mitigated by there just being more such rankings though, as well as having clear disclaimers. E.g. at the moment, people might look at 80k and Open Phil rankings and conclude that there must be strong evidence behind the ratings. But if they see that there are 5 different ranked lists with only some amount of overlap, it's implicitly pretty clear that there's a lot of subjectivity and difficult decision-making going into this. (I don't agree with it being "meaningless" or "dishonest" -- I think that relates to the points above.)

Also, I personally think that GiveWell might do the most work which achieves the substance of what you are looking for within global health and wellbeing. And, like you mentioned, the Copenhagen Consensus does a pretty good job of outlining what they think might be the 12 best interventions (Best Things First), with much reasoning and calculation behind each one.

Thanks a lot for these pointers! I will look into them more carefully. This is exactly the sort of thing I was hoping to receive in response to this quick take, so thanks a lot for your help. Best Things First sounds great and I've added it to my Audible wishlist. Is this what you have in mind for GiveWell? (Context: I'm not very familiar with global health.)

I'd be interested to hear what you think might be the upsides of "ranking" specifically vs clustering our best estimates at effective cause areas/interventions.

Oh, this might have just been me using unintentionally specific language. I would have included "tiered" lists as part of "ranked". Indeed, the Open Phil list is tiered rather than numerically ranked. Thank you for highlighting this though; I've edited the original post to add the word "tiered". (Is that what you meant by "clustering our best estimates at effective cause areas/interventions"? Lmk if you meant something else.)

Thanks again!

Given that effective altruism is "a project that aims to find the best ways to help others, and put them into practice"[1] it seems surprisingly rare to me that people actually do the hard work of:

  1. (Systematically) exploring cause areas
  2. Writing up their (working hypothesis of a) ranked or tiered list, with good reasoning transparency
  3. Sharing their list and reasons publicly.[2]

The lists I can think of that do this best are 80,000 Hours', Open Philanthropy's, and CEARCH's.

Related things I appreciate, but aren't quite what I'm envisioning:

  • Tools and models like those by Rethink Priorities and Mercy For Animals, though they're less focused on explanation of specific prioritisation decisions.
  • Longlists of causes by Nuno Sempere and CEARCH, though these don't provide ratings, rankings, or reasoning.
  • Various posts pitching a single cause area and giving reasons to consider it a top priority without integrating it into an individual or organisation's broader prioritisation process.

There are also some lists of cause area priorities from outside effective altruism / the importance, neglectedness, tractability framework, although these often lack any explicit methodology, e.g. the UN, World Economic Forum, or the Copenhagen Consensus.

If you know of other public writeups and explanations of ranked lists, please share them in the comments![3]

  1. ^

    Of course, this is only one definition. But my impression is that many definitions share some focus on cause prioritisation, or first working out what doing the most good actually means.

  2. ^

    I'm a hypocrite of course, because my own thoughts on cause prioritisation are scattered across various docs, spreadsheets, long-forgotten corners of my brain... and not at all systematic or thorough. I think I roughly:

    - Came at effective altruism with a hypothesis of a top cause area based on arbitrary and contingent factors from my youth/adolescence (ending factory farming),

    - Had that hypothesis worn down by various information and arguments I encountered, and changed my views on the top causes,

    - Didn't ever go back and do a systematic cause prioritisation exercise from first principles (e.g. evaluating cause candidates from a longlist that includes 'not-core-EA™-cause-areas' or based on criteria other than ITN).

    I suspect this is pretty common. I also worry people are deferring too much on what is perhaps the most fundamental question of the EA project.

  3. ^

    Rough and informal explanations welcome. I'd especially welcome any suggestions that come from a different methodology or set of worldviews & assumptions to 80k and Open Phil. I ask partly because I'd like to be able to share multiple different perspectives when I introduce people to cause prioritisation to avoid creating pressure to defer to a single list.

Oh, my suggestion wasn't necessarily that they're alternatives to receiving any donations; they could be supplements. They could be things you experiment with that could help to make the channel more sustainable and secure.

Sad news for https://pivotalcontest.org/

(I'm shocked that EA now has two "Blue Dot"s and two "Pivotal"s -- neither of which has the words "effective", "institute", or "initiative" anywhere to be seen.)

It seems like video release frequency is a significant bottleneck for you?

I'm not sure what the main time costs are, but some guesses of things that might help:

  • freelancers, as you say
  • going for less thoroughly edited videos
  • doing some crowdsourcing or having volunteers/collaborators help write scripts
  • using LLMs more in the writing
  • doing interviews, article readouts, or other formats that enable you to produce long-form content fairly quickly (perhaps mixed in with the existing formats)
  • just setting yourself aggressive targets and working it out as you go

(FWIW I feel slightly surprised they take so long to create, but I've never tried creating videos as high quality and engaging as yours need to be.)

Maybe there are also other routes to monetisation, e.g. Patreon, ads/sponsorships for videos (maybe from EA orgs), or pitching orgs on videos you could do for them on your channel that you otherwise wouldn't do.

Thanks a lot for this! I may reply in more detail later but I wanted to send a quick interim note; this is exactly the sort of useful feedback and info I was hoping to elicit with this post!

I don't disagree with any specific point in this but somewhat disagree with the overall thrust of the recommendation. I suspect most people could learn more (and more quickly) by trying out more specialised roles, especially in high-quality, established organisations with better mentorship and support networks.

(I've never been a uni group organiser so not sure what the mentorship and support networks are actually like; I'm mostly just guessing and extrapolating from my own experience having been a generalist researcher then running a talent search org covering multiple cause areas.)

I don't feel like I've learnt very much that's very useful over the past year or two. Probably similar amounts to when I was a teacher in a secondary school, and far less than when I was a researcher.
