(Thanks to Beth Barnes, Asya Bergal, and especially Joshua Teperowski-Monrad for comments. A lot of the ideas in this post originated with friends of mine who didn’t want to write them up; they deserve most of the credit for the good ideas, and I just organized them and wrote them up.)
80000 Hours writes (under the heading “Apply an unusual strength to a needed niche”):
If there’s any option in which you might excel, it’s usually worth considering, both for the potential impact and especially for the career capital; excellence in one field can often give you opportunities in others.
This is even more likely if you’re part of a community that’s coordinating or working in a small field. Communities tend to need a small number of experts covering each of their main bases.
For instance, anthropology isn’t the field we’d most often recommend someone learn, but it turned out that during the Ebola crisis, anthropologists played a vital role, since they understood how burial practices might affect transmission and how to change them. So, the biorisk community needs at least a few people with anthropology expertise.
I think that there are many people who do lots of good through pursuing a career that they were a particularly good fit for, rather than by trying to fit themselves into a top-rated EA career. But I also think it’s pretty easy to pursue such paths in a way that isn’t very useful. In this post I’m going to try to build on this advice to describe some features of how I think these nonstandard careers should be pursued in order to maximize impact.
I’m going to misleadingly use the term “nonstandard EA career” to mean “a career that isn’t one of 80K’s top suggestions”. (I’m going to abbreviate 80,000 Hours as 80K.)
I’m not very confident in my advice here, but even if the advice is bad, hopefully the concepts and examples are thought provoking.
Doing unusual amounts of good requires unusual actions
If you want to do an unusual amount of good, you probably need to take some unusual actions. (This isn’t definitionally true, but I think most EAs should agree on it; at the very least, most EAs think you can do much more good than most people do by donating an affordable but unusual share of your income to GiveWell-recommended nonprofits.)
One approach to this is working in a highly leveraged job on a highly leveraged problem. This is the approach suggested by the 80K career guide. They came up with a list of career options, like doing AI safety technical research, or working at the CDC on biosecurity, or working at various EA orgs, which they think are particularly impactful and which they think have room for a bunch of EAs.
Another classic choice is donating to unusually effective nonprofits. This is a plan where you don’t have to choose a particularly specific career path (though taking a specific career path is extremely helpful); the unusual effectiveness comes from the choice to donate to an unusually effective place.
The nice thing about taking one of those paths is that you might not need to do anything else unusual in order to have a lot of impact.
There are also some reasons to consider doing something that isn’t earning to give (EtG) or an 80K recommendation. For example:
- There might be lower hanging fruit in that field, because it doesn’t have as many EAs in it.
- Comparative advantage: The thing that you’re best at probably isn’t a top recommended EA career, just on priors.
- You can’t get a job in one of the top recommended EA careers, maybe because you’re not in 80K’s target audience of graduates aged 20-35 who have lots of career options. I care about people in this group, but I’m not going to address this situation much in the rest of this piece, because the relevant considerations are somewhat different than the ones I wanted to write about here.
But if you go into a career that wasn’t as carefully selected for impact, then you’re going to have to do something unusual in order to be highly impactful. Phrased differently: if you want to do as much good as the 50th percentile operations staff at an EA org, I suspect you’re going to have to do more good than the 90th percentile anthropologist, and probably more than the 99th percentile.
(I got worried that someone was going to comment on this post with incontrovertible proof that anthropologists actually do lots of great work, so I spent a wholly unreasonable amount of time investigating this. Some notes on that: The American Anthropological Association has a page about Careers in Anthropology here, which you can take a look at. It doesn’t mention any careers which look as good to me as preventing pandemics. The AAA did a survey of anthropologists which found that 12% of respondents with anthropology master’s degrees worked in humanitarian work, health, or international development. When I click around on anthropology career websites, they don’t mention this kind of work very much. On the other hand, this paper says “The number of local anthropologists engaged as partners in malaria control efforts was impressive, although largely not published in medical anthropological journals. It is important to remember that, at this time, from the perspective of the academic field of medical anthropology, applied research for health programs as clients was considered research of low value”, which implies otherwise. Also, when I skimmed most of an anthropology textbook recently, it didn’t seem to put emphasis on the possibility for anthropologists to do useful altruistic work.)
When I think about how people can do unusual amounts of good through a career path which doesn’t have good average impact outcomes, a few particular strategies stand out to me, including:
- Being unusually flexible, for example becoming a good web developer then working on small and promising EA projects doing somewhat generalisty work.
- Doing good by becoming an expert on a subject which EA can get value from and figuring out how to learn about the parts of the subject which will be most helpful for EAs, and then applying yourself to those questions. Eg being a historian who researches the history and dynamics of social movements.
- Doing good by careful targeting within a field. Eg you’re a medical researcher who does projects that wouldn’t have happened otherwise, either because you have unusually good judgement about the future (eg you realized five years ago that AI might be a big deal soon) or because you have unusual priorities (eg being interested in life extension or preventing x-risk).
- As the quote from 80K at the top of this post points out, if you do really well in a field this can provide career capital for work in other fields.
“Good judgement”
I think there’s an important distinction to be drawn between nonstandard EA career plans which require you to have great epistemics and judgement and career paths which don’t. But to describe that distinction, I first need to explain what I mean by “good judgement”.
When I say someone has good judgement, I mean that I think they’re good at the following things:
- Spotting the important questions. When they start thinking about a topic (How good is leaflet distribution as an intervention to reduce animal suffering? How should we go about reducing animal suffering? How worried should we be about AI x-risk? Should we fund this project?), they come up with key considerations and realize what they need to learn more about in order to come to a good decision.
- Having good research intuitions. They are good at making quick guesses for answers to questions they care about. They think critically about evidence that they are being presented with, and spot ways that it’s misleading.
- Having good sense about how the world works and what plans are likely to work. They make good guesses about what people will do, what organizations will do, how the world will change over time. They have good common sense about plans they’re considering executing on; they rarely make choices which seem absurdly foolish in retrospect.
- Knowing when they’re out of their depth, knowing who to ask for help, knowing who to trust.
These skills allow people to do things like the following:
- Figure out cause prioritization
- Figure out if they should hire someone to work on something
- Spot which topics are going to be valuable to the world for them to research
- Make plans based on their predictions for how the world will look in five years
- Spot underexplored topics
- Spot mistakes that are being made by people in their community; spot subtle holes in widely-believed arguments
I think it’s likely that there exist things you can read and do which make you better at having good judgement about what’s important in a field and strategically pursuing high impact opportunities within it. I suspect that other people have better ideas, but here are some guesses. (As I said, I don’t think that I’m overall great at this, though I think I’m good at some subset of this skill.)
- Being generally knowledgeable seems helpful.
- Learning the history of science (or of other fields which have a clear notion of progress) seems good. I’ve heard people recommend reading contemporaneous accounts of scientific advancements, so that you learn more about what it’s like to be in the middle of such shifts.
- Perhaps this is way too specific, but I have been trying to come up with a general picture of how science advances by talking to scientists about how their field has progressed over the last five years and how they expect it to progress in the next five. For example, maybe the field is changing because computers are cheaper now, or because we can make more finely tuned lasers or smaller cameras, or because we can more cheaply manufacture something. I think that doing this has given me a somewhat clearer picture of how science develops, and what the limiting factors tend to be.
- I think that you can improve your skills at this by working with people who are good at it. To choose some arbitrary people, I’m very impressed by the judgement of some people at Open Phil, MIRI, and OpenAI, and I think I’ve become stronger from working with them.
- The Less Wrong sequences try to teach this kind of judgement; many highly-respected EAs say that the Sequences were very helpful for them, so I think it’s worth trying them out. I found them very helpful. (Inconveniently, many people whose judgement I’m less impressed with are also big fans of the Sequences. And many smart EAs find them off-putting or unhelpful.)
Doing good in a way that requires self-direction and good judgement
In this kind of career path, you try to produce value by gaining some expertise that EA doesn’t have, and then doing good work that otherwise wouldn’t be done by people in that field.
EA has a lot of questions which an excellent historian could answer. I think EAs have done useful work researching (among other things) the history of long-range forecasting, early field growth, and nuclear policy. And it would be great to have more similar work.
However, to do the kind of work that I’d be excited about seeing from historians, you’d need to have skills and intellectual intuitions that are quite different from those usually expected in a historian. For example, I want people with a skeptical mindset, who take quantitative and big-picture approaches when those approaches are reasonable, who have patience for careful fact checking, and who have a good sense of what might be useful to EA for them to study. Some historians have these properties, but I think they’re not the main values that history as an academic discipline tries to instill in people who are being trained up by PhD programs.
And so I don’t think that EA will get the kind of historians it could use by finding people who already want to be historians and who fit in great with historians, and telling them to go learn from the historians how to do history. This is tricky, because I suspect that people like that make up the majority of those who are tempted by that section of the 80K website to keep studying history.
I think that this mismatch is closely related to the reason we want EA historians at all. If you didn’t have to be an unusual historian to maximize your EA impact, then EA could just hire normal historians to do whatever history work it needed done. (I suspect that even though biosecurity needs some anthropologists, no EAs motivated by biosecurity should go into anthropology.)
You can break down the kinds of helpful unusualness into two broad categories, which correspond to reasons that we can’t just hire historians:
- Unusual methods. With historians, I want people who are relatively empirical, unattached to the standard narratives of their field, focused on the big picture, and economically literate. When EAs try to hire people to do EA work, it’s often hard to get them to do work in the style that we’re interested in; this is one reason you might instead need an EA to acquire the relevant skillset.
- Unusual focus. It might be hard for someone outside the field to correctly determine which topics in history are most important for EA to have a better sense of, or to notice that there’s some emerging type of historical research which could be applied to an important question.
For both of these categories, you’re relying on the quality of your own judgement more than you would be if you were following a more standard career path, because you’re more on your own: very few EAs will know which methods it would be really helpful for an EA historian to pursue, or which methods of historical research are effective at answering a particular type of historical question.
I think the quality of judgement required here is pretty high, and that you should consider trying to figure out how good your judgement is (or how good it could become) before you start going down career paths which won’t be helpful if you don’t have good judgement. I don’t think my judgement is good enough that I’d be comfortable trying to do good by going into a field where I wasn’t able to get good advice from EAs with better judgement than me; I feel like if I tried really hard to improve my quality of judgement, I might get good at it but I probably wouldn’t. (That said, I think it’s worth my time to practice having good judgement.)
This makes me think that for a lot of nonstandard EA career paths like this, the required level of commitment and context on EA is higher than it would be if you’re doing engineering at OpenAI or something like that. And I think that if you don’t have that commitment and context, you’re likely to fail to have much impact.
Another example
I recently talked to an EA who’s been working as a software engineer at a respectable tech company for the last two years. They were considering taking a job where they’d be developing technology which would make it more secure to use cloud computing for sensitive computations.
I think that novel mechanisms for secure computation might be helpful for AI x-risk and maybe other things. I’d like it if there was someone who was paying close attention to developments in this field, and trying to figure out which developments are important, and making friends with various experts in different parts of the field.
However, I could also imagine this person going into that field and it not being helpful at all. I imagine them getting really focused on the subproblem that they happen to be employed working on, or getting a promotion that ends up with them getting expertise in something other than the technical questions that are most helpful.
Overall I said I thought that taking the job was a good idea.
Doing good, web developer style
There are lots of EAs who do a lot of good by pursuing jobs which don’t require these difficult judgement calls; in these cases, they try to do good by pursuing a career that they have good personal fit for, where the impact comes from working for someone else. For example: web developers, operations staff, management, marketing, communications.
These jobs are sometimes hard to fill. For example: somehow, Ought hasn’t yet hired an engineering team lead, despite the fact that there are many full-stack engineers in EA, Ought works on a cause area that’s a popular choice for top priority, and it has strong endorsements from respected members of the EA community. I’ve seen a few other occasions where promising EA projects were bottlenecked on web development effort, for example CEA a few years ago and more recently LessWrong and OpenAI.
I think that a reasonable strategy for impact would be to say “I’m a web developer, so I’m going to become an excellent god damn web developer and then eventually someone’s going to hire me and I’m going to do an amazing job for them”.
AFAICT, if you want impact via this kind of strategy, you need to do a few things right:
- Make it easy for EA orgs to hire you. A classic failure mode is that there’s some EA who would be a great fit for a job but they refuse to take the job. To do this, it’s helpful to be willing to quit your goddamn job, and to set up your life such that you’re able to, eg by having some runway. Also, if a plausibly really high impact org tries to hire you for a role that they think is really important but which you don’t initially want to take, I think it’s really helpful to approach the situation as a pair investigation of the question of whether there’s some version of the job that you’re willing to do. I think it’s a real shame when EAs kind of drag their feet on opportunities; I think it’s more likely to lead to good outcomes if people try to openly say “I don’t want to join your thing because of X and Y, do you think I’m missing something?”
- Try to learn skills that will make you more attractive to a wide variety of EA projects. Within web development, being an expert at using Google-internal tools to scale up distributed web applications is pretty unhelpful; knowing how to quickly build web apps with support for analytics is pretty helpful. As another example, 80K suggests that if you’re in marketing you should focus on digital and data-driven marketing.
- Try to be a good fit for roles that are a bit weird or generalisty, and be willing to take jobs that aren’t entirely web dev.
I think that if you’re not willing/able to join pretty speculative or early stage roles that aren’t well defined yet, you’re missing out on like 75% of the expected impact. This is fine, but I don’t want people to do it by accident.
We can also analyse this EA career plan through the lens of why we can’t hire non-EAs to do the stuff:
- Non-EAs are often unwilling to join weird small orgs, because they’re very risk-averse and don’t want to join a project that might fold after four months.
- Non-EAs aren’t as willing to do random generalist tasks.
- It’s easier to trust EAs than non-EAs to do a task when it’s hard to supervise them, because the EAs might be more intrinsically motivated by the task.
- Non-EAs might not have as much context on some particular EA ideas that are relevant. They might also be disruptive to various aspects of the EA culture that the org wants to preserve. For example, in my experience EA orgs often have a culture that involves people being pretty transparent with their managers about the weaknesses in the work they’ve done, and hiring people who have less of that attitude can screw up the whole culture.
So I think that you can do good via paths like this, but again, it’s not exactly an easy option–to do it well, it’s helpful to be somewhat strategic and it’s important to have a level of flexibility which is empirically unusual. I think people following paths like this might be able to substantially increase their impact by specifically thinking about how they can increase the probability that they are hired to do useful direct work someday.
Conclusion
If you want to do an unusual amount of good, you obviously have to be doing something unusual. If you do a standard top-recommended EA career, you might not need to do anything more unusual than that to get impressive impact results.
I don’t have solid evidence for this, but I am kind of worried that people might make a mistake where they go from “It’s possible to do lots of good in nonstandard careers” and “The thing I currently want to do involves doing a nonstandard career” to “It’s possible to do lots of good by doing the thing I currently want to do”. My guess is that if you want to do lots of good in a nonstandard EA career, you need to do something nonstandard within that career.
Related posts:
- Information security careers for GCR reduction: this post is a specific example of some of my general points here
- On the concept of talent constrained organizations: in a world where your top causes are “talent constrained”, it seems worth thinking carefully about exactly what that means.
- The career and the community, a post by Richard Ngo with similar themes
Thanks for this post. I think discussions about career prioritisation often become quite emotional and personal in a way that clouds people's judgements. Sometimes I think I've observed the following dynamic.
1. It's argued, more or less explicitly, that EAs should switch career into one of a small number of causes.
2. Some EAs are either not attracted to those careers, or are (or at least believe that they are) unable to successfully pursue those careers.
3. The preceding point means that there is a painful tension between the desire to do the most good, and one's personal career prospects. There is a strong desire to resolve that tension.
4. That gives strong incentives to engage in motivated reasoning: to arrive at the conclusion that actually, this tension is illusory; one doesn't need to engage in tough trade-offs to do the most good. One can stay on doing roughly what one currently does.
5. The EAs who believe in point 1 - that EAs should switch career to other causes - are often unwilling to criticise the reasoning described in point 4. That's because these issues are rather emotional and personal, and because some may think it's insensitive to criticise people's personal career choices.
I think similar dynamics play out with regards to cause prioritisation more generally, decisions whether to fund specific projects which many feel strongly about, and so on. The key aspects of these dynamics are 1) that people often are quite emotional about their choice, and therefore reluctant to give up on it even in the face of better evidence and 2) that others are reluctant to engage in serious criticism of the former group, precisely because the issue is so clearly emotional and personal to them.
One way to mitigate these problems and to improve the level of debate on these issues is to discuss the object-level considerations in a detached, unemotional way (e.g. obviously without snark); and to do so in some detail. That's precisely what this post does.
Point 5 also has a negative impact on the people who are trying to decide between different career options and would actually be happy to hear constructive criticism. I often feel like I cannot trust others to be honest in their feedback when I'm deciding between career options, because they prefer to be 'nice'.
Can I add the importance of patience and trust/faith here?
I think a lot of non-standard career paths involve doing a lot of standard stuff to build skill and reputation, while maintaining a connection with EA ideas and values and keeping an eye open for unusual opportunities. It may be 10 or 20 years before someone transitions into an impactful position, but I see a lot of people disengaging from the community after 2-3 years if they haven't gotten into an impactful position yet.
Furthermore, trusting that one's commitment to EA and self-improvement is strong enough to lead to an impactful career 10 years down the line can create a self-fulfilling prophecy where one views their career path as "on the way to impact" rather than "failing to get an EA job". (I'm not saying it's easy to build, maintain, and trust one's commitment though.)
In addition, I think having good language is really important for keeping these people motivated and involved. We have "building career capital" and Tara MacAulay's term "Journeymen", but I'm afraid these are not catchy enough.
This might just be restating what you wrote, but regarding learning unusual and valuable skills outside of standard EA career paths:
I believe there is a large difference in the context of learning a skill. Two 90th-percentile quality historians with the same training would come away with very different usefulness for EA topics if one learned the skills keeping EA topics in mind, while the other only started thinking about EA topics after their training. There is something about immediately relating and applying skills and knowledge to real topics that creates more tailored skills and produces useful insights during the whole process, which cannot be recreated by combining EA ideas with the content knowledge/skills at the end of the learning process. I think this relates to something Owen Cotton-Barratt said somewhere, but I can't find where. As far as I recall, his point was that 'doing work that actually makes an impact' is a skill that needs to be trained, and you can't just first get general skills and then decide to make an impact.
Personally, even though I did a master's degree in Strategic Innovation Management with longtermism ideas in mind, I didn't have enough context and engagement with ideas on emerging technology to apply the things I learned to EA topics. In addition, I didn't have the freedom to apply the skills. Besides the thesis, all grades were based on either group assignments or exams. So some degree of freedom is also an important aspect to look for in non-standard careers.
Owen speaks about that in his 80k interview.
This post was awarded an EA Forum Prize; see the prize announcement for more details.
My notes on what I liked about the post, from the announcement:
I found this post very useful for thinking about my own career; thanks for writing it up. My prospects also don't fall neatly into the top recommended paths, so I'd be interested in more discussion of how to train my "good judgement".
Summarizing your ingredients of good judgment:
What do you think about participating in a forecasting platform, e.g. Good Judgement Open or Metaculus? It seems to cover all the ingredients, and even to be a good signal for others to evaluate your judgement quality. When I participated in GJO for a couple of months, I was demotivated by the lack of feedback on the reasoning in my forecasts; I could only look at the reasoning of other forecasters and at my Brier score, of course.
P.S.: Your thinking appears to be very clear and you seem rather competent, so I wonder if your bar of "good enough judgement" to reasonably pursue non-standard paths is too high. I also wonder if people whose judgement you trust would agree with your diagnosis that you wouldn't have good enough judgement for a non-standard path.
Seems pretty good for predicting things about the world that get resolved on short timescales. Sadly it seems less helpful for practicing judgement about things like the following:
Re my own judgement: I appreciate your confidence in me. I spend a lot of time talking to people who have IMO better judgement than me; most of the things I say in this post (and a reasonable chunk of things I say other places) are my rephrasings of their ideas. I think that people whose judgement I trust would agree with my assessment of my judgement quality as "good in some ways" (this was the assessment of one person I asked about this in response to your comment).
Tangential point of information: Cliometrics and cliodynamics are quantitative, big-picture approaches to history. (Unfortunately, the books/articles I've seen have actually been disappointing. If anyone has reading recommendations, I'd be very enthused.)
I have found the handbook of cliometrics pretty useful: https://link.springer.com/referencework/10.1007%2F978-3-642-40458-0
Thanks. I think I initially passed over this because I tend to prefer textbooks or other non-handbook books as a first introduction, but I'll give it a second look.
The worry section made me giggle and I really appreciated it, and felt a kinship in undertaking such a process when I've written things before :)