Jamie_Harris

Grantmaking @ Polaris Ventures and EA Infrastructure Fund
2781 karma · Joined · Working (6-15 years) · London N19, UK

Bio


Jamie is a Program Associate at Polaris Ventures, doing grantmaking to support projects and people aiming to build a future guided by wisdom and compassion for all. Polaris' focus areas include AI governance, digital sentience, and reducing risks from fanatical ideologies and malevolent actors.

He also spends a few hours a week as a Fund Manager at the Effective Altruism Infrastructure Fund, which aims to increase the impact of projects that use the principles of effective altruism by increasing their access to talent, capital, and knowledge.

Lastly, Jamie is President of Leaf, an independent nonprofit that supports exceptional teenagers to explore how they can best save lives, help others, or change the course of history. (Most of the hard work is being done by the wonderful Jonah Boucher though!)

Jamie previously worked as a teacher, as a researcher at the think tank Sentience Institute, and as co-founder and researcher at Animal Advocacy Careers, which helps people to maximise their positive impact for animals.
 

Comments
356

Topic contributions
1

I didn't write that wording originally (I just copied it over from this post), so I can't speak exactly to their original thinking.

But I think the phrasing includes the EA community; it just uses the plural to avoid excluding others.

Some examples that jump to mind:

  • EA
  • Rationality, x-risk, s-risk, AI safety, wild animal welfare, etc., to varying degrees
  • Org-specific communities, e.g. the fellows and follow-up opportunities on various fellowship programmes.

 

"I would like to more clearly understand what the canonical 'stewards of the EA brand' in CEA and the EAIF have in mind for the future of EA groups and the movement as a whole?"

I think this suggests more of a sense of unity/agreement than I expect is true in practice. These are complex things and individuals have different views and ideas!

 

Thanks for thinking this stuff through and coming up with ideas!

Hi Daniel! I don't have a lot to elaborate on here; I haven't really thought much about the practicalities. I was just flagging that proposals and ideas relating to regranting seem like a plausible way to help with funding diversification.

Also, just FYI: the specific intervention idea, which could be promising, would fall within the remit of EA Funds' Animal Welfare Fund (where I do not work), not the Infrastructure Fund (where I do). I didn't check with the fund managers there whether they endorse what I've written here.

Based on this information alone, EAIF would likely prefer a later application (e.g. once some event that would resolve the uncertainty has passed), to avoid wasting our time.

But I don't think this would particularly affect your chances of application success. And maybe there are good reasons to want to apply sooner?

And I wouldn't leave it too long anyway, since applications sometimes take around two months to be approved. Usually less, and very occasionally more.

"I think fairly standard EA retreats / fellowships are quite good at this"

Maybe. To take cause prio as an example, my impression is that the framing is often a bit more like: 'here are lots of cause areas EAs think are high impact! Also, cause prioritisation might be very important.' (That's basically how I interpret the vibe and emphasis of the EA Handbook / EAVP.) Not so much 'cause prio is really important. Let's actually try to do that and think carefully about how to do it well, without just deferring to existing people's views.'

So there's a direct version like that which I'd be excited about.

Although, perhaps contradictorily, I'm also envisaging something even more indirect than the retreats/fellowships you mention as a possibility, where the impact comes through generally developing skills that enable people to be top contributors to EA thinking, top cause areas, etc.

"I don't know much about LW/ESPR/SPARC but I suspect a lot of their impact flows through convincing people of important ideas and/or the social aspect rather than their impact on community epistemics/integrity?"

Yeah, I think this is part of it. But I also think they help by getting people to think carefully and arrive at more sensible, better processes/opinions.

Seems fair. I do work there, I promise this post isn't an elaborate scheme to falsely bulk out my CV.

Mm, they don't necessarily need to be small! (Of course, big projects often start small, and our funding is more likely to look like early/seed funding in these instances.) E.g. I'm thinking of LessWrong or something like that. A concrete example of a smaller project would be ESPR/SPARC, which have a substantial (albeit not sole) focus on epistemics and rationality and some good evidence of positive effects, e.g. in Open Phil's longtermism survey.

But I do think the impacts might be more diffuse than other grants. E.g. we won't necessarily be able to count participants, look at quality, and compare to other programmes we've funded.

Some of the sorts of outcomes I have in mind are just things like altered cause prioritisation, different projects getting funded, and generally better decision-making.

I expect we would in practice judge whether these seemed on track to be useful by a combination of (1) case studies/stories of specific users and the changes they made and (2) statistics about usage.

(I do like your questions/pushback though; it's making me realise that this is all a bit vague and maybe when push comes to shove with certain applications that fit into this category, I could end up confused about the theory of change and not wanting to fund.)

Thanks! Sorry to hear the epistemics stuff was so frustrating for you and caused you to leave EA.

Yes, it's very plausible that the example interventions don't really get to the core of the issue -- I didn't spend long creating those, and they're meant more as examples to help spark ideas than as confident recommendations on the best interventions. Perhaps I should have flagged this in the post.

Re "centralized control and disbursion of funds": I agree that my example ideas in the epistemics section wouldn't help with this much. Would the "funding diversification" suggestions below help here?

And I'd be intrigued to hear, if you're up for elaborating, why you don't think the sorts of "What could be done?" suggestions would help with the other two problems you highlight. (They're not optimised for addressing those two specific concerns, of course, but insofar as they all relate back to bad/weird epistemic practices, things like epistemics training programmes might help?) No worries if you don't want to or don't have time though.

Thanks again!

I’ve been working a few hours per week at the Effective Altruism Infrastructure Fund as a Fund Manager since Summer this year.

EA’s reputation is at a bit of a low point. I’ve even heard EA described as the ‘boogeyman’ in certain well-meaning circles. So why do I feel inclined to double down on effective altruism rather than move on to other endeavours? Some shower thoughts:

  • I generally endorse aiming directly for the thing you actually care about. It seems higher integrity, and usually more efficient. I want to do the most good possible, and this goal already has a name and community attached to it: EA.
  • I find the core, underlying principles very compelling. The Centre for Effective Altruism highlights scope sensitivity, impartiality, recognition of tradeoffs, and the Scout Mindset. I endorse all of these!
  • Seems to me that EA has a good track record of important insights on otherwise neglected topics. Existential risk, risks of astronomical suffering, AI safety, wild animal suffering; I attribute a lot of success in these nascent fields to the insights of people with a shared commitment to EA principles and goals.
  • Of course, there’s been a lot of progress on slightly less neglected cause areas too. The mind boggles at the sheer number of human lives saved and the vast amount of animal suffering reduced by organisations funded by Open Philanthropy, for example.
  • I have personally benefited massively in achieving my own goals. Beyond some of the above insights, I attribute many improvements in my productivity and epistemics to discussions and recommendations that arose out of the pursuit of EA.
  • In other roles or projects I’m considering, when I think of questions like “who will actually, realistically consider acting on this idea I think is great? Who will give up their time or money to make this happen?”, the most obvious and easiest answer often looks like some subset of the EA community. Obviously there are some echo-chamber-y and bias-related reasons that might feed into this, but I think there are some real and powerful ones too.

 

Written quickly (15-20 mins), not neatly/well (originally to post on LinkedIn rather than here). There are better takes on this topic (e.g.).

There are some pragmatic, career-focused reasons too of course. I’m better networked inside EA than outside of it. I have long thought grantmaking is a career direction I’d like to try my hand at, and this seemed like a good specific opportunity for me.

Further caveats I didn't have space to make on LinkedIn: I wrote this quick take as an individual, not on behalf of EAIF or my other projects; I haven't checked it with colleagues. There are also identity-related and bias reasons that draw me to stay involved with EA. It seems clear that EA has had a lot of negative impact too. And of course we have deep empirical and moral uncertainty about what's actually good and useful in the long run after accounting for indirect effects. I haven't attempted any sort of quantitative analysis of the overall effects.

But in any case, I still expect that overall EA has been and will be a positive force for good. And I’m excited to be contributing to EAIF’s mission. I just wrote a post about Ideas EAIF is excited to receive applications for; please consider checking that out if any of this resonates and/or you have ideas about how to improve EA and the impact of projects making use of EA principles!

You highlight a couple of downsides. Far from all of the downsides, of course, and none of the advantages either.

I feel a bit sad to read this, since I've worked for years on something related[1] to what you post about. And I'm a bit confused about why you posted it; is it that you think EAs are underrating these two downsides? (If not, it just feels a bit unnecessarily disparaging to people trying their best to do good in the world.)

Appreciate you highlighting your personal experience though; that's a useful anecdote.

 

  1. ^

    "Targeting of really young people" is certainly not the framing I would use; there's genuine demand for the services that we offer, as demonstrated by the tens of thousands of applications received across Leaf, Non-Trivial, Atlas, Pivotal, and SPARC/ESPR. But it's of course accurate in the sense that our target audience consists of (subsets of) young people.

Another consideration I just encountered in a grantmaking decision:

Other decision-makers in EA might be those whose views we are most inclined to defer to or cooperate with. So upon noticing that an opportunity is underfunded in EA specifically but not in the world at large, arguably I should update away from wanting to fund it when I consider Open Phil and EA donations specifically, as opposed to donations in the world more broadly. Whereas I think the thrust of your post implies the opposite.

(@Ariel Simnegar 🔸, although again no need to reply. Possibly I'm getting into an unnecessary tangle by considering this 'EA spending vs world spending' lens)
