Jamie_Harris

Grantmaking @ Polaris Ventures and EA Infrastructure Fund
2714 karma · Joined · Working (6-15 years) · London N19, UK

Bio

Jamie is a Program Associate at Polaris Ventures, doing grantmaking to support projects and people aiming to build a future guided by wisdom and compassion for all. Polaris' focus areas include AI governance, digital sentience, and reducing risks from fanatical ideologies and malevolent actors.

He also spends a few hours a week as a Fund Manager at the Effective Altruism Infrastructure Fund, which aims to increase the impact of projects that use the principles of effective altruism by increasing their access to talent, capital, and knowledge.

Lastly, Jamie is President of Leaf, an independent nonprofit that supports exceptional teenagers to explore how they can best save lives, help others, or change the course of history. (Most of the hard work is being done by the wonderful Jonah Boucher though!)

Jamie previously worked as a teacher, as a researcher at the think tank Sentience Institute, and as co-founder and researcher at Animal Advocacy Careers, which helps people to maximise their positive impact for animals.
 

Comments (350)

Thanks! Sorry to hear the epistemics stuff was so frustrating for you and caused you to leave EA.

Yes, it's very plausible that the example interventions don't really get to the core of the issue -- I didn't spend long creating them, and they're meant more as examples to help spark ideas than as confident recommendations on the best interventions or some such. Perhaps I should have flagged this in the post.

Re "centralized control and disbursion of funds": I agree that my example ideas in the epistemics section wouldn't help with this much. Would the "funding diversification" suggestions below help here?

And I'd be intrigued, if you're up for elaborating, to hear why you don't think the sorts of "What could be done?" suggestions would help with the other two problems you highlight. (They're not optimising for addressing those two specific concerns, of course, but insofar as they all relate back to bad/weird epistemic practices, things like epistemics training programmes might help?) No worries if you don't want to or don't have time though.

Thanks again!

I’ve been working a few hours per week at the Effective Altruism Infrastructure Fund as a Fund Manager since summer this year.

EA’s reputation is at a bit of a low point. I’ve even heard EA described as the ‘boogeyman’ in certain well-meaning circles. So why do I feel inclined to double down on effective altruism rather than move on to other endeavours? Some shower thoughts:

  • I generally endorse aiming directly for the thing you actually care about. It seems higher integrity, and usually more efficient. I want to do the most good possible, and this goal already has a name and community attached to it: EA.
  • I find the core, underlying principles very compelling. The Centre for Effective Altruism highlights scope sensitivity, impartiality, recognition of tradeoffs, and the Scout Mindset. I endorse all of these!
  • It seems to me that EA has a good track record of generating important insights on otherwise neglected topics: existential risk, risks of astronomical suffering, AI safety, wild animal suffering. I attribute a lot of the success in these nascent fields to the insights of people with a shared commitment to EA principles and goals.
  • Of course, there’s been a lot of progress on slightly less neglected cause areas too. The mind boggles at the sheer number of human lives saved and the vast amount of animal suffering reduced by organisations funded by Open Philanthropy, for example.
  • I have personally benefited massively in achieving my own goals. Beyond some of the above insights, I attribute many improvements in my productivity and epistemics to discussions and recommendations that arose out of the pursuit of EA.
  • In other roles or projects I’m considering, when I think of questions like “Who will actually, realistically consider acting on this idea I think is great? Who will give up their time or money to make this happen?”, the most obvious and easiest answer often looks like some subset of the EA community. Obviously there are some echo chamber-y and bias-related reasons that might feed into this, but I think there are some real and powerful ones too.

 

Written quickly (15-20 mins), not neatly/well (originally to post on LinkedIn rather than here). There are better takes on this topic (e.g.).

There are some pragmatic, career-focused reasons too of course. I’m better networked inside EA than outside of it. I have long thought grantmaking is a career direction I’d like to try my hand at, and this seemed like a good specific opportunity for me.

Further caveats I didn't have space to make on LinkedIn: I wrote this quick take as an individual, not on behalf of EAIF or my other projects, and I haven't checked it with colleagues. There are also identity-related and bias reasons that draw me to stay involved with EA. It seems clear that EA has had a lot of negative impact too. And of course we have deep empirical and moral uncertainty about what's actually good and useful in the long run after accounting for indirect effects. I haven't attempted any sort of quantitative analysis of the overall effects.

But in any case, I still expect that overall EA has been and will be a positive force for good. And I’m excited to be contributing to EAIF’s mission. I just wrote a post about Ideas EAIF is excited to receive applications for; please consider checking that out if any of this resonates and/or you have ideas about how to improve EA and the impact of projects making use of EA principles!

You highlight a couple of downsides (far from all of them, of course) but none of the advantages.

I feel a bit sad reading this, since I've worked on something related[1] to what you post about for years myself. And I'm a bit confused about why you posted it; do you think EAs are underrating these two downsides? (If not, it just feels a bit unnecessarily disparaging to people trying their best to do good in the world.)

Appreciate you highlighting your personal experience though; that's a useful anecdote.

 

  1.

    "Targeting of really young people" is certainly not the framing I would use; there's genuine demand for the services that we offer, as demonstrated by the tens of thousands of applications received across Leaf, Non-Trivial, Atlas, Pivotal, and SPARC/ESPR. But it's of course accurate in the sense that our target audience consists of (subsets of) young people.

Another consideration I just encountered in a grantmaking decision:

Other decision-makers in EA might be the people whose views we are most inclined to defer to or cooperate with. So upon noticing that an opportunity is underfunded in EA specifically but not in the world at large, I should arguably update away from wanting to fund it when I consider Open Phil and EA donations specifically, as opposed to donations in the world more broadly. Whereas I think the thrust of your post implies the opposite.

(@Ariel Simnegar 🔸, although again no need to reply. Possibly I'm getting into an unnecessary tangle by considering this 'EA spending vs world spending' lens) 

<<Would AIM produce more than one times our current impact at four times our budget? Sure, almost definitely, but it would be way less than four times the impact, and I think this is true for many organizations. Thus AIM made a deliberate call to stay at a smaller level of scale than the highest amounts offered to us.>>

Couldn't you just take (some of) the funding and regrant?

(This could be discretionary rather than through application processes and presumably wouldn't take you much time, given that you have lots of informed views about different orgs' needs and potential anyway. E.g. could mostly be to incubatees. Although I imagine maybe there are some dynamics you'd prefer to avoid there in terms of your relationship with them.)

Yeah it might be more tractable.

Focusing solely on EAs has a bunch of weird effects though.

E.g. I've been thinking about some 'safeguarding democracy' type interventions for longtermist reasons. If I looked at EA funding alone, I'd presumably conclude that the area was massively underfunded -- almost no one is working on this. Whereas looking in a global sense, the initial impression is that it's a very large, well-funded area. (Maybe it's still a useful heuristic though, because explicitly longtermist funding and effort might focus on quite different subcomponents of the broad topic?)

And another one is just that how liberal you are in your definitions of what's EA or not can make quite a big difference. E.g. plausibly by a factor of 2 in the case of animal advocacy.

(No need to reply, I'm just musing.)

Thanks!

Open Philanthropy instituted a policy of no longer funding community-building grants focused on high-school students or minors. The org I have run for several years, Leaf, is currently funding-constrained partly due to this. I don't know of funders who have 'stepped in', so to speak (leads welcome!), although I also work a few hours a week at the EA Infrastructure Fund, which is happy to make grants in this area.

I had a related concern that (if this funding model became more widespread) it could lead to overinvestment in more legible/obvious contributions relative to more behind-the-scenes ones. E.g. 80k gets lots of donations but Effective Ventures doesn't; a local group gets donations but the orgs that created the resources it reused and adapted (BlueDot, CEA, etc.) don't.

(I still think on net I agree that it'd be cool to shift somewhat in this direction though. And I think more community building services should consider charging or making donations an opt-out rather than opt-in default.)

I initially found myself nodding in agreement but then I realised a confusion I have:

Why should a donor/grantmaker limit their consideration of what is most underfunded to the EA community?

After all, the EA community is a nebulous community with porous boundaries. E.g. we count Open Phil, but what about The Navigation Fund? The Bill and Melinda Gates Foundation? And even if we can define the boundaries, what do we actually gain by focusing on this specific subset of donors?

If you instead focus on 'what is most underfunded at the global level' then the question returns to the same broad question of cause prioritisation ("your value system's most preferred causes").

Yeah, I agree in principle that it "might be for good reason", though I still have some sense that it seems desirable to reduce overdependence on your ratings for one or two criteria. Similar to the reasoning for sequence thinking vs. cluster thinking.
