Jamie_Harris

President @ Leaf
2594 karma · Joined · Working (6-15 years) · London N19, UK

Bio


Jamie works on grantmaking and research at Polaris Ventures, which supports projects and people aiming to build a future guided by wisdom and compassion for all. Polaris is the main grantmaker focused on reducing risks of astronomical suffering in the long-term future (s-risks); its focus areas include AI governance, digital sentience, and reducing risks from fanatical ideologies and malevolent actors.

He also spends a few hours a week as a Fund Manager at the Effective Altruism Infrastructure Fund, which aims to increase the impact of projects that use the principles of effective altruism by increasing their access to talent, capital, and knowledge.

Lastly, Jamie is President of Leaf, an independent nonprofit that supports exceptional teenagers to explore how they can best save lives, help others, or change the course of history. (Most of the hard work is being done by the wonderful Jonah Boucher though!)

Jamie previously worked as a teacher, as a researcher at the think tank Sentience Institute, and as co-founder and researcher at Animal Advocacy Careers, which helps people to maximise their positive impact for animals.
 

Comments (341)


Yeah, I agree in principle it "might be for good reason", though it still seems desirable to me to reduce overdependence on your ratings for one or two criteria. Similar to the reasoning for sequence thinking vs. cluster thinking.

I sometimes do this, but I wonder if it defeats one of the key benefits of a WFM -- that it accounts for multiple criteria and prevents any single consideration from dominating.

(With BOTECs, sometimes the final ranking/conclusion is very dependent on one or two very uncertain or arbitrary criteria.)

Also, just copying (unedited) a related rough note-to-self from my own list of potential entrepreneurial projects (which I'm very unlikely to ever actually work on myself):

Impact-focused red-teaming, consultancy, and feedback marketplace

Problem I faced as a founder: Often making decisions where I would have loved external input but felt reluctant because I didn't want to ask people for favours, and didn't necessarily know who I could ask other than my personal network.

Solution: a platform like Fiverr or similar where people willing to give feedback and advice are listed with a description of their interests/background, their hourly rate, plus a bunch of tags (e.g. cause area, experience type) so that people can filter by types of role. 

Probably take a % cut of the fees, e.g. the Fiverr system: "Service fees are 5.5% of the purchase amount. For purchases under $100, an additional $3.00 small order fee will be applied." Maybe also add some minimum amount in case people want to list their services for free, e.g. always charge a minimum of $1 per hour of service performed. (I think we could maybe get away with charging a lot higher rates than Fiverr, though, like 10% or 20%. E.g. if someone is willing to pay $50 for 2 hours of review from someone, they're probably willing to pay $60.)
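(A minimal sketch of that fee arithmetic in Python, assuming the quoted Fiverr policy plus the hypothetical 10-20% cut and $1/hour minimum floated above; names and numbers are illustrative, not a worked-out design.)

```python
# Sketch of the fee options described above. The 5.5% + $3 small-order fee
# is Fiverr's quoted policy; the higher cut_rate and $1/hour minimum are the
# hypothetical alternatives from the note.

def fiverr_style_fee(purchase_amount: float) -> float:
    """Fiverr-style service fee: 5.5%, plus $3.00 for purchases under $100."""
    fee = 0.055 * purchase_amount
    if purchase_amount < 100:
        fee += 3.00
    return fee


def platform_cut(purchase_amount: float, hours: float,
                 cut_rate: float = 0.10, min_per_hour: float = 1.00) -> float:
    """Hypothetical higher cut (10-20%), with a per-hour minimum charge."""
    return max(cut_rate * purchase_amount, min_per_hour * hours)


# Example from the note: $50 for 2 hours of review.
print(fiverr_style_fee(50.0))        # 5.75  -> Fiverr would take ~$5.75
print(platform_cut(50.0, 2, 0.10))   # 5.00  -> the ~$5 in the "Downside" estimate below
print(platform_cut(50.0, 2, 0.20))   # 10.00
```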

Later could potentially expand into a broader freelancing type platform.

Misc idea: crowdsource ratings of how well the job was done against a bunch of different criteria, and encourage ACCURACY in the ratings. Batch the ratings in groups of 5 to help anonymise the source of the rating and thereby reduce embarrassment for not giving wholly positive ratings.
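(A rough sketch of the batching idea, assuming ratings are simply held back and published as a batch average once five have accumulated; the function and variable names are made up for illustration.)

```python
# Hold back individual ratings and only publish an aggregate once a batch of 5
# is ready, so no single rating can be traced back to a particular client.

from collections import defaultdict
from statistics import mean

BATCH_SIZE = 5
pending = defaultdict(list)    # provider_id -> ratings waiting for a full batch
published = defaultdict(list)  # provider_id -> averages of published batches


def submit_rating(provider_id: str, score: float) -> None:
    """Queue a rating; publish only when a full anonymising batch has built up."""
    pending[provider_id].append(score)
    if len(pending[provider_id]) >= BATCH_SIZE:
        batch = pending[provider_id][:BATCH_SIZE]
        pending[provider_id] = pending[provider_id][BATCH_SIZE:]
        published[provider_id].append(mean(batch))  # could equally be done per criterion
```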

Downside: Probably pretty small volumes of money. E.g. a person red-teaming a doc for 2 hours might only charge ~$50, of which we would get ~$5. So we'd need to achieve a really high usage volume for it to become financially self-sustaining and justify the time on the setup.

[I'm sharing partly because my "Misc idea" seems like it could help with the problem you highlight.]

Interesting! Could you share one or two examples of the "Templates [that] already exist on-line for this exact type of marketplace"?

Since you've thought about it a bit already, I'd be interested if you have any thoughts on how long something like this would take to set up to a high standard on the technical/operational side, excluding time spent "finding enough early adopters".

(Also, I'm guessing this isn't something you're interested in doing yourself?)

Hey James! I've heard the claim a couple of times that EA orgs systematically underinvest in marketing. I was wondering if you are able to share (here or privately) any direct evidence that EA orgs are indeed doing that?

I appreciate the broad point that 'marketing can have benefits' (a crude summary of the takeaway from the "5 key pieces of research here"), but is there evidence that EA orgs are (consistently) spending less time and money on marketing than would be optimal?

 

E.g. skimming this post I didn't really see evidence for the claim in the first sentence: 

By not properly addressing the role of marketing in the effective altruism movement, there is substantial impact being forfeited

 

A separate but somewhat related question that happens to be relevant to something else I'm thinking about at the moment: do you have a rough take on how much EA meta orgs should be 'willing to pay' for newsletter subscribers, vs how much they actually are willing to pay? Again, might be easier to discuss with reference to specific orgs, potentially not publicly on the Forum.

 

Thanks!

Thanks! That's helpful. 

  • Seems to me that at least 80,000 Hours still "bat for longtermism" (e.g. it's very central in their resources about cause prioritisation).
  • Not sure why you think that no "'EA leader' however defined is going to bat for longtermism any more in the public sphere".
  • Longtermism (or at least, x-risk / GCRs as proxies for long-term impact) seems pretty crucial to various prioritisation decisions within AI and bio?
  • And longtermism unequivocally seems pretty crucial to s-risk work and justification, although that's a far smaller component of EA than x-risk work.

(No need to reply to these, just registering some things that seem surprising to me.)

"Longtermism is dead": I feel quite confused about what the idea is here.

Is it that (1) people no longer find the key claims underlying longtermism compelling? (2) it seems irrelevant to influencing decisions? (3) it seems less likely to be the best messaging strategy for motivating people to take specific actions? (4) something else?

I'm also guessing that this is just a general summary of vibe and attitudes from people you've spoken to, but if there's some evidence you could point to that demonstrates this overall point or any of the subpoints I'd be pretty interested in that.

(Responding to you, but Peter made a similar point.)

Thanks!

Just dumping this here in case it's helpful for someone stumbling back across this: Here's a "Worksheet for choosing the most pressing problem" I made.

Yep, I realise that. 

Also, I feel like a big limitation is that this data comes from asking current orgs. Asking current orgs how many "connectors" they need feels a bit like asking a company how many CEOs they want.

Nonetheless, still an update! E.g. this bit was slightly surprising to me:

Funders of independent researchers we’ve interviewed think that there are plenty of talented applicants, but would prefer more research proposals focused on relatively few existing promising research directions (e.g., Open Phil RFPs, MATS mentors' agendas), rather than a profusion of speculative new agendas. This leads us to believe that they would also prefer that independent researchers be approaching their work from an Iterator mindset, locating plausible contributions they can make within established paradigms, rather than from a Connector mindset, which would privilege time spent developing novel approaches.
