In this post I discuss several strong non-epistemic incentives and issues that can influence EA community members to pursue longtermist[1] career paths (specifically x-risk reduction careers and AI safety[2]).
For what it’s worth, I am personally sympathetic to longtermism, and to people who want to create more incentives for longtermist careers, because of the high urgency some assign to AI safety and the fact that longtermism is a relatively new field. (I am currently running career support pilots to support early-career longtermists.) However, I think it’s important to think carefully about career choices, even when it’s difficult. I’m worried that these incentives lead people to feel (unconscious and conscious) pressure to pursue (certain) longtermist career paths even if those paths may not be the right choice for them. I think it’s good to be thoughtful about cause prioritization and career choices, especially for people earlier in their careers.
Incentives
Good pay and job security
In general, longtermist careers pay very well compared to standard nonprofit jobs, and early-career roles are sometimes competitive with for-profit jobs (<30% salary difference, with the exception of some technical AI safety roles). Jobs at organisations which receive significant funding (including for-profit orgs) usually attract the best talent because they can offer better pay, job security, team culture, structure, and overall lower risk.
It could be difficult to notice “if either longtermism as a whole or specific spending decisions turned out to be wrong. Research suggests that when a lot of money is on the line, our judgment becomes less clear. It really matters that the judgment of EAs is clear, so having a lot of money on the line should be cause for concern.” … “This is especially problematic given the nature of longtermism, simultaneously the best-funded area of EA and also the area with the most complex philosophy and weakest feedback loops for interventions.”
Funding in an oligopoly
There are currently only a handful of funders giving to longtermist causes.[4] Funders have also actively centralized decision-making in the past (see some reasoning), which creates more pressure to defer to funders’ interests in order to get funding. I’m concerned that people defer too much to funders’ preferences and go after less impactful projects as a result.
Money “[crowds] out the effect of other incentives and considerations that would otherwise guide these processes.”[5] Therefore, people are “incentivised to believe whatever will help them get funding” and “particular worldviews will get artificially inflated.”[6]
Within community building, I have heard a handful of first- and second-hand accounts of people feeling that funders are pushing them towards getting more people into longtermism. Many EAIF & CEA community building grantmakers and staff are longtermist, and these organizations have historically received significant funding from OP’s EA longtermism community team. The feedback community builders receive is often not very clear: there seems to be confusion around evaluation metrics and a general lack of communication, and when there is feedback it’s limited (especially for those lacking access to core networks and hubs). These accounts are impressions that we’ve heard, and probably don’t always or fully represent funders’ intentions. (This also exacerbates the role models & founder effects issues discussed below.)
I’ve also often heard a common refrain amongst people in all cause areas about the challenges of getting funding in the EA ecosystem - and that it’s impossible to get funding outside of it. To them, it’s not worth the resources it would take to find non-EA funding (especially when they lack in-house fundraising capacity, which they pretty much all do). I don’t think they’re always (or often) right - but it shows how people might end up deferring too much to funders.
It’s easier to socially defer
Beyond deferring to funders, it’s also easier and more convenient to defer to what community leaders or other high-status individuals think is best. The general culture of EA creates a very high bar for challenging the status quo.
I believe most experienced community members agree there can be too much deference from high-context community members on personal career decisions (such as which causes to work on, which career paths to take, and which specific organizations or roles to join), where they would probably be better off deferring less. This is partly caused and exacerbated by a lack of high-fidelity advice and contextualization.
If you agree with the status quo, you are challenged less than if you disagree. It’s easier (and sometimes more rewarding) to socially defer and contribute to information cascades. On the flip side, it’s hard to disagree because disagreements are held up to much more scrutiny than deference. The community can be forgiving when people own up to mistakes or change their mind, but I don’t think it goes far enough.
See more discussion on deference here and here.
High status
The most influential EA funders and meta organizations (and their staff), such as Open Philanthropy, 80,000 Hours, and CEA, assign the most importance and status to longtermist and AI safety careers, and have been intentionally trying for years to increase the status and prestige of longtermism and AI safety through generous funding, priority mentorship, and conferring high social status.
Most influential introductory content emphasizes longtermism, and many new introductory programs that are being funded focus on longtermism.[7] Many of these new programs (whose participants then join the community) are positioned as elite programs (by providing funding to travel abroad, scholarships, etc.). In the past few years, there have been a handful of publicized longtermist projects, some with explicit endorsement from leading meta organizations or leaders.[8] Similar non-longtermist projects haven’t gotten as much attention, and discourse being overly dominated by longtermist considerations has led to some frustration.[9]
When a lot of money is put into a cause or project, it can give the impression that the impact is necessarily bigger - but I believe this can create an illusion of an efficient market where one doesn’t exist. Spending patterns are more reflective of an organization’s grants and willingness to spend money than of impact - and it’s likely that when longtermists estimate the value of their time, they are biased towards overestimating it.[10]
Pursuing longtermist careers can give you access to high-status individuals such as thought leaders and large funders. Longtermist opportunities more frequently enable people to move or travel to hubs or gated (coworking) spaces and events and to join important networks.[11] This is partly caused by EA, and longtermism in particular, being overly reliant on personal connections and constrained by vetting capacity.
Role models & founder effects
Most community builders today appear to prioritize longtermism,[12] and this could create founder effects where newcomers are influenced by them to also prioritize longtermism. Organizers are typically higher status within their communities and can be seen as role models. They make the career paths they choose to pursue more real and concrete, and they will likely be better informed about options in longtermist spaces, so the information their members get is filtered. This is why I think it’s really important for community builders to help their members make connections with people outside their community and outside EA.
Community builders also create founder effects, which are felt more strongly in newer communities. There may not be opportunities or adequate support for new members to do cause prioritization or context-specific global priorities research from scratch if founders have already prioritized longtermist causes.[13]
Availability
Since ~2020, there have been many more EA-branded opportunities to upskill for longtermist careers than for non-longtermist ones.[14] Comparatively, there are fewer opportunities for non-longtermist causes, and the ones that exist are smaller. This has created an availability bias between longtermist and non-longtermist opportunities, which can nudge people towards pursuing longtermist careers because those opportunities are socially acceptable, desirable, and easier to access.
The availability bias can push people towards less impactful roles even within longtermist career paths. There are many good or even excellent upskilling opportunities for biosecurity and technical AI alignment outside of the EA movement, but members of the EA community tend to neglect these in favor of EA-branded courses. Biosecurity, for example, is a well-established field outside of EA with many excellent upskilling opportunities (e.g. pursuing the George Mason Global Biodefense Masters, joining professional societies like ABSA, engaging with the UN and WHO).[15] There are also many opportunities for general ML research & engineering upskilling (such as Masters or PhD programs, research assistant positions, and applied ML positions that don’t directly contribute to capabilities).
I’ve observed (from conversations with both senior and junior professionals in these fields) that people are less likely to seriously consider and pursue these kinds of external opportunities.
Support
There is relatively more support within the EA movement for longtermist careers than for non-longtermist careers (this does not mean I think it is sufficient). There is more personal funding support,[16] more infrastructure support for new projects,[17] several dedicated x-risk/AIS groups[18] which can provide more targeted advice, resources, and guidance for those pursuing longtermist career paths, and several dedicated longtermist coworking spaces.
The one exception to this trend is the availability of mentorship and guidance, especially in AI safety. Mentorship is relatively easier to find for non-longtermist causes since those fields are more established.
Conclusion & Suggestions
Based on my observations, I am pretty concerned about the current incentive structure, and how it’s affecting people’s decisions, career choices, and the overall culture of the EA community. I’d be keen to hear pushback on these observations and counter-examples of instances where folks think the current incentive structure works well or seems worth the potential costs.
A short and incomplete list of suggestions:
- Awareness as the first step: talking openly about the incentives. This post is aiming to do that.
- More programs that encourage people to think carefully about their careers and give them support & accountability to act on those decisions (what we’re piloting over the next 4-6 months).
- More transparency and clarity around how community building funding decisions are made (especially to community members - my funding flows post was one attempt to do this).
- I’ve previously made suggestions for ways to reduce availability bias (encouraging people to be more proactive in their search for opportunities, more high fidelity and personalized advice, more targeted outreach for open positions).
- I like a lot of Linch’s suggestions to combat motivated reasoning, which is very closely related (e.g. encouraging more skepticism amongst newcomers, trying harder to accept newcomers, supporting external critiques and impact assessments, encouraging and rewarding dissent, and biasing towards open discussions on strategy).
Many thanks to Arjun, Dion, ES, Cristina, Angela, Amber, Renan, Sasha, Linda, Adam & Lynette for reviewing early drafts of this post and sharing thoughts.
This post is belatedly part of the September 2023 Career Conversations Week. You can see other Career Conversations Week posts here.
- ^
A few reviewers mentioned that longtermism may be the wrong term to use here, since philosophical longtermism isn’t as cause-area specific. However, I’ve left it in because it is a frequently used term and I’m talking about several career paths within it. One might argue that I should use the term “x-risk reduction careers” since I’m primarily talking about AI and biosecurity - and that might be better.
- ^
I estimate that 80-90% of longtermist funding goes towards x-risk reduction, and within that at least 40-70% goes towards AI safety.
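(Taken together, and these are only rough, illustrative figures, those estimates would imply that roughly 0.8 × 0.4 ≈ 32% to 0.9 × 0.7 ≈ 63% of all longtermist funding goes to AI safety.)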
- ^
H/T Cristina Schmidt Ibáñez
- ^
A few have open applications (Open Philanthropy, SFF, LTFF), and a slightly larger group gives non-publicly - primarily individual donors and advisory organizations (Longview & Effective Giving). There are some funders outside of EA-aligned funding circles, but as far as I have seen, very few longtermist organizations are actively seeking (and receiving) significant money from those sources.
- ^
Jan Kulveit on collective epistemic attention and distortion
- ^
William MacAskill, EA and the current funding situation
- ^
For introductory content, see discussion on CEA and 80,000 Hours. Introductory programs include Atlas Fellowship, Nontrivial Pursuits, Future Academy and the Global Challenges Project.
- ^
The Precipice and What We Owe The Future were both promoted by CEA via book giveaways and promotion to EA groups and EAG(x) conferences. The FTX Future Fund also received a lot of attention when it was launched. Comparatively, Peter Singer’s re-releases / updates of The Life You Can Save and Animal Liberation Now have received much less attention.
- ^
A recent post titled EA successes no one cares about and frustrations over a lack of nuance regarding funding gaps represent the general sentiment well.
- ^
H/T Dion Tan. There was some discussion on this in 2022 where (even controlling for the increase of funding) people seemed to overestimate the value of their time by a lot, which could cause them to overvalue the cause as a whole. (See recent discussion on university group salaries).
- ^
For example, the Constellation coworking space in Berkeley where several leaders of longtermist organizations work, or the coordination forum.
- ^
Source: informal conversations with community builders and people who work with them
- ^
H/T Angela Aristizábal. If longtermist causes become divorced from EA writ large, then this could prevent the creation of EA-as-a-question communities which pursue a range of different causes outside existing hubs.
- ^
Examples of career upskilling opportunities run by EA field building orgs are AI safety and biosecurity programs from BlueDot Impact, SERI MATS, ERA, CHERI, the Century Fellowship, early career funding scholarships, and policy fellowships. For non-longtermist opportunities, BlueDot runs an alternative proteins course, but my understanding is that this is mostly paused. Charity Entrepreneurship has run its incubation program with ~80-100 participants (my estimate) since 2019, largely founding global health, animal welfare, and more recently some meta and biosecurity organizations. There are China- & South East Asia-based farmed animal fellowships, and Animal Advocacy Careers has run an online course for the past few years. This gap exists because it’s harder to get funding for non-longtermist field building efforts: the bar for funding is higher and there is less dedicated funding from existing funders (although there are still talent bottlenecks). Many longtermist fields are newer and less mature, with fewer shovel-ready projects for people to do, so supporting field building efforts there is a higher priority for funders. See more: why we need more nuance regarding funding gaps.
- ^
H/T ES
- ^
In 2022, some Future Fund regrantors (LTFF may have also given some grants) awarded grants for people interested in AI safety to visit the Bay Area, or career transition grants (there was no equivalent for other causes).
- ^
Examples include cFactual, EV, fiscal sponsors, Good Impressions. There are a few meta orgs in the animal space such as Animal Ask, The Mission Motor and Good Growth.
- ^