Jamie is a Program Associate at Polaris Ventures, doing grantmaking to support projects and people aiming to build a future guided by wisdom and compassion for all. Polaris' focus areas include AI governance, digital sentience, and reducing risks from fanatical ideologies and malevolent actors.
He also spends a few hours a week as a Fund Manager at the Effective Altruism Infrastructure Fund, which aims to increase the impact of projects that use the principles of effective altruism by increasing their access to talent, capital, and knowledge.
Lastly, Jamie is President of Leaf, an independent nonprofit that supports exceptional teenagers to explore how they can best save lives, help others, or change the course of history. (Most of the hard work is being done by the wonderful Jonah Boucher though!)
Jamie previously worked as a teacher, as a researcher at the think tank Sentience Institute, and as co-founder and researcher at Animal Advocacy Careers, which helps people to maximise their positive impact for animals.
Hi Daniel! I don't have a lot to elaborate on here; I haven't really thought much about the practicalities. I was just flagging that proposals and ideas relating to regranting seem like a plausible way to help with funding diversification.
Also, just FYI on the specific intervention idea (which could be promising): it would fall within the remit of EA Funds' Animal Welfare Fund (where I don't work), not the Infrastructure Fund (where I do). I haven't checked with the fund managers there whether they endorse anything I've written here.
Based on this information alone, EAIF would likely prefer an application later (e.g. after some event that would resolve the uncertainty has passed), to avoid us wasting our time.
But I don't think this would particularly affect your chances of application success. And maybe there are good reasons to want to apply sooner?
And I wouldn't leave it too long anyway, since applications sometimes take e.g. 2 months to be approved. Usually less, and very occasionally more.
I think fairly standard EA retreats / fellowships are quite good at this.
Maybe. To take cause prio as an example, my impression is that the framing is often a bit more like: 'here are lots of cause areas EAs think are high impact! Also, cause prioritisation might be v important.' (That's basically how I interpret the vibe and emphasis of the EA Handbook / EAVP.) Not so much 'cause prio is really important. Let's actually try and do that and think carefully about how to do this well, without just deferring to existing people's views.'
So there's a direct version along those lines ^ that I'd be excited about.
Although, perhaps contradictorily, I'm also envisaging something even more indirect than the retreats/fellowships you mention as a possibility, where the impact comes through generally developing skills that enable people to be top contributors to EA thinking, top cause areas, etc.
I don't know much about LW/ESPR/SPARC but I suspect a lot of their impact flows through convincing people of important ideas and/or the social aspect rather than their impact on community epistemics/integrity?
Yeah I think this is part of it. But I also think that they help by getting people to think carefully and arrive at sensible and better processes/opinions.
Mm, they don't necessarily need to be small! (Ofc, big projects often start small, and our funding is more likely to look like early/seed funding in these instances.) E.g. I'm thinking of LessWrong or something like that. A concrete example of a smaller project would be ESPR/SPARC, which have a substantial (albeit not sole) focus on epistemics and rationality and have shown some good evidence of positive effects, e.g. in Open Phil's longtermism survey.
But I do think the impacts might be more diffuse than other grants. E.g. we won't necessarily be able to count participants, look at quality, and compare to other programmes we've funded.
Some of the sorts of outcomes I have in mind are just things like altered cause prioritisation, different projects getting funded, and generally better decision-making.
I expect we would in practice judge whether these seemed on track to be useful by a combination of (1) case studies/stories of specific users and the changes they made, and (2) statistics about usage.
(I do like your questions/pushback though; it's making me realise that this is all a bit vague and maybe when push comes to shove with certain applications that fit into this category, I could end up confused about the theory of change and not wanting to fund.)
Thanks! Sorry to hear the epistemics stuff was so frustrating for you and caused you to leave EA.
Yes, it's very plausible that the example interventions don't really get to the core of the issue -- I didn't spend long creating them, and they're meant more as examples to spark ideas than as confident recommendations on the best interventions or some such. Perhaps I should have flagged this in the post.
Re "centralized control and disbursion of funds": I agree that my example ideas in the epistemics section wouldn't help with this much. Would the "funding diversification" suggestions below help here?
And if you're up for elaborating, I'd be intrigued to hear why you don't think the sorts of "What could be done?" suggestions would help with the other two problems you highlight. (They aren't optimised for addressing those two specific concerns, of course, but insofar as those concerns all relate back to bad/weird epistemic practices, things like epistemics training programmes might help?) No worries if you don't want to or don't have time though.
Thanks again!
I’ve been working a few hours per week at the Effective Altruism Infrastructure Fund as a Fund Manager since summer this year.
EA’s reputation is at a bit of a low point. I’ve even heard EA described as the ‘boogeyman’ in certain well-meaning circles. So why do I feel inclined to double down on effective altruism rather than move on to other endeavours? Some shower thoughts:
Written quickly (15-20 mins), not neatly/well (originally to post on LinkedIn rather than here). There are better takes on this topic (e.g.).
There are some pragmatic, career-focused reasons too of course. I’m better networked inside EA than outside of it. I have long thought grantmaking is a career direction I’d like to try my hand at, and this seemed like a good specific opportunity for me.
Further caveats I didn't have space to make on LinkedIn: I wrote this quick take as an individual, not on behalf of EAIF or my other projects etc.; I haven't checked it with colleagues. There are also identity-related and bias reasons that draw me to stay involved with EA. It seems clear that EA has had a lot of negative impact too. And of course we have deep empirical and moral uncertainty about what's actually good and useful in the long run after accounting for indirect effects. I haven't attempted any sort of quantitative analysis of the overall effects.
But in any case, I still expect that overall EA has been and will be a positive force for good. And I’m excited to be contributing to EAIF’s mission. I just wrote a post about Ideas EAIF is excited to receive applications for; please consider checking that out if any of this resonates and/or you have ideas about how to improve EA and the impact of projects making use of EA principles!
You highlight a couple of downsides -- far from all of the downsides, of course, and none of the advantages.
I feel a bit sad to read this, since I've worked on something related[1] to what you post about for years myself. And I'm a bit confused about why you posted this; do you think EAs are underrating these two downsides? (If not, it just feels a bit unnecessarily disparaging to people trying their best to do good in the world.)
Appreciate you highlighting your personal experience though; that's a useful anecdote.
"Targeting of really young people" is certainly not the framing I would use; there's genuine demand for the services that we offer, as demonstrated by the tens of thousands of applications received across Leaf, Non-Trivial, Atlas, Pivotal, and SPARC/ESPR. But it's of course accurate in the sense that our target audience consists of (subsets of) young people.
Another consideration I just encountered in a grantmaking decision:
Other decision-makers in EA might be the people whose views we are most inclined to defer to or cooperate with. So upon noticing that an opportunity is underfunded in EA specifically but not in the world at large, arguably I should update away from wanting to fund it when I consider Open Phil and EA donations specifically, as opposed to donations in the world more broadly. Whereas I think the thrust of your post implies the opposite.
(@Ariel Simnegar 🔸, although again no need to reply. Possibly I'm getting into an unnecessary tangle by considering this 'EA spending vs world spending' lens)
I didn't write that wording originally (I just copied it over from this post), so I can't speak exactly to their original thinking.
But I think the phrasing includes the EA community, it just uses the plural to avoid excluding others.
Some examples that jump to mind:
I think this suggests more of a sense of unity/agreement than I expect is true in practice. These are complex things and individuals have different views and ideas!
Thanks for thinking this stuff through and coming up with ideas!