Jamie works on grantmaking and research at Polaris Ventures, which supports projects and people aiming to build a future guided by wisdom and compassion for all. Polaris is the main grantmaker focused on reducing risks of astronomical suffering in the long-term future (s-risks). Its focus areas include AI governance, digital sentience, and reducing risks from fanatical ideologies and malevolent actors.
He also spends a few hours a week as a Fund Manager at the Effective Altruism Infrastructure Fund, which aims to increase the impact of projects that use the principles of effective altruism by increasing their access to talent, capital, and knowledge.
Lastly, Jamie is President of Leaf, an independent nonprofit that supports exceptional teenagers to explore how they can best save lives, help others, or change the course of history. (Most of the hard work is being done by the wonderful Jonah Boucher though!)
Jamie previously worked as a teacher, as a researcher at the think tank Sentience Institute, and as co-founder and researcher at Animal Advocacy Careers, which helps people to maximise their positive impact for animals.
Interesting! Could you share one or two examples of the "Templates [that] already exist on-line for this exact type of marketplace"?
Since you've thought about it a bit already, I'd be interested if you have any thoughts on how long something like this would take to set up to a high standard on the technical/operational side, excluding time spent "finding enough early adopters".
(Also, I'm guessing this isn't something you're interested in doing yourself?)
Hey James! I've heard the claim a couple of times that EA orgs systematically underinvest in marketing. I was wondering if you are able to share (here or privately) any direct evidence that EA orgs are indeed doing that?
I appreciate the broad point that 'marketing can have benefits' (a crude summary of the takeaway from the "5 key pieces of research here"), but is there evidence that EA orgs are (consistently) spending less time and money on marketing than would be optimal?
E.g. skimming this post I didn't really see evidence for the claim in the first sentence:
By not properly addressing the role of marketing in the effective altruism movement, there is substantial impact being forfeited
A separate but somewhat related question that happens to be relevant to something else I'm thinking about at the moment: do you have a rough take on how much EA meta orgs should be 'willing to pay' for newsletter subscribers, vs how much they actually are willing to pay? Again, might be easier to discuss with reference to specific orgs, potentially not publicly on the Forum.
Thanks!
Thanks! That's helpful.
(No need to reply to these, just registering some things that seem surprising to me.)
"Longtermism is dead": I feel quite confused about what the idea is here.
Is it that (1) people no longer find the key claims underlying longtermism compelling? (2) it seems irrelevant to influencing decisions? (3) it seems less likely to be the best messaging strategy for motivating people to take specific actions? (4) something else?
I'm also guessing that this is just a general summary of vibe and attitudes from people you've spoken to, but if there's some evidence you could point to that demonstrates this overall point or any of the subpoints I'd be pretty interested in that.
(Responding to you, but Peter made a similar point.)
Thanks!
Just dumping this here in case it's helpful for someone stumbling back across this: Here's a "Worksheet for choosing the most pressing problem" I made.
Yep, I realise that.
I also feel like a big limitation is that this data comes from asking current orgs. Asking current orgs how many "connectors" they need feels a bit like asking a company how many CEOs it wants.
Nonetheless, still an update! E.g. this bit was slightly surprising to me:
Funders of independent researchers we’ve interviewed think that there are plenty of talented applicants, but would prefer more research proposals focused on relatively few existing promising research directions (e.g., Open Phil RFPs, MATS mentors' agendas), rather than a profusion of speculative new agendas. This leads us to believe that they would also prefer that independent researchers be approaching their work from an Iterator mindset, locating plausible contributions they can make within established paradigms, rather than from a Connector mindset, which would privilege time spent developing novel approaches.
Also, just copying unedited a related rough note-to-self from my own list of potential entrepreneurial projects (which I'm very unlikely to ever actually work on myself):
[I'm sharing partly because my "Misc idea" seems like it could help with the problem you highlight.]