
Doing a lot of good has been a major priority in my life for several years now. Unfortunately I made some substantial mistakes which have lowered my expected impact a lot, and I am on a less promising trajectory than I would have expected a few years ago. In the hope that other people can learn from my mistakes, I thought it made sense to write them up here! I will attempt to list the mistakes which lowered my impact most over the past several years in this post and then analyse their causes. Writing this post and previous drafts has also been very personally useful to me, and I can recommend undertaking such an analysis.

Please keep in mind that my analysis of my mistakes is likely at least a bit misguided and incomplete.

It would have been nice to condense the post a bit more and structure it better, but having already spent a lot of time on it and wanting to move on to other projects, I thought it would be best not to let the perfect be the enemy of the good!

To put my mistakes into context, I will give a brief outline of what happened in my career-related life in the past several years before discussing what I consider to be my main mistakes.

Background

I came across the EA Community in 2012, a few months before I started university. Before that point my goal had always been to become a researcher. I studied for a mathematics degree in Germany until early 2017 and received a couple of scholarships. I did a lot of ‘EA volunteering’ over the years, mostly community building and large-scale grantmaking. I also did two unpaid internships at EA orgs, one during my degree and one after graduating, in summer 2017.

After completing my summer internship, I started to try to find a role at an EA org. I applied to ~7 research and grantmaking roles in 2018. I got to the last stage 4 times, but received no offers. The closest I got was receiving a 3-month trial offer as a Research Analyst at Open Phil, but it turned out they were unable to provide visas. In 2019, I worked as a Research Assistant for a researcher at an EA-aligned university institution on a grant for a few hundred hours. I stopped as there seemed to be no route to a secure position and the role did not seem like a good fit.

In late 2019 I applied for jobs suitable for STEM graduates with no experience. I also stopped doing most of my EA volunteering. In January 2020 I began to work in an entry-level data analyst role in the UK Civil Service, which I have been really happy with. In November, after 6.5 months of full-time-equivalent work, I received a promotion to a more senior role with management responsibility and a significant pay rise.

First I am going to discuss what I think I did wrong from a first-order practical perspective. Afterwards I will explain which errors in my decision-making process I consider the likely culprits for these mistakes - the patterns of behaviour which need to be changed to avoid similar mistakes in the future.

A lot of the following seems pretty silly to me now, and I struggle to imagine how I ever fully bought into the mistakes and systematic errors in my thinking in the first place. But here we go!

What did I get wrong?

  1. I did not build broad career capital or keep my options open. During my degree, I mostly focused on EA community building efforts as well as making good donation decisions. I made few attempts to build skills for the type of work I was most interested in doing (research) or skills that would be particularly useful for higher earning paths (e.g. programming), especially later on. My only internships were at EA organisations in research roles. I also stopped trying to do well in my degree later on, and stopped my previously-substantial involvement in political work.

  2. In my first year after finishing my degree and post-graduation summer internship, I only applied to ~7 roles, exclusively at EA organisations. That is way too small a number for anyone who actually wants a job!

  3. 1.5 years after graduating, I gave up hoping for any EA org role. I started to apply for ordinary jobs, but then accepted a support role for an EA researcher on a grant after a personal introduction, only working part time. This was despite the fact that there were few outside-view signs that this would be a good idea except it being an EA role, and no clear plan for how this would result in a real job [or impact].

These mistakes were not created equal - the first and second had a much larger negative impact than the third. The combination of the second and third mistake had the direct impact of me being unemployed or underemployed for over 2 years when I wanted to work. When I finally started a ‘real job’, it had been almost 3 years since I graduated.

Which systematic errors in my decision-making likely caused these mistakes?

While I tried to group my assessment of the underlying causes of my mistakes by theme to make them easier to read, they often tie into each other. I am uncertain in my assessments even now, so please read the below in that light.

I relied too much on the EA community.

When I thought about how I wanted to do the most good in my life, I prioritised being cooperative and loyal to the EA community over any other concrete goal to have an impact. I think that was wrong, or at least wrong without a very concrete plan backing it up.

I put too much weight on what other people thought I should be doing, and wish I had developed stronger internal beliefs. Because I wanted to cooperate, I considered a nebulous concept of ‘the EA community’ the relevant authority for decision-making. Around 2015-2019 I felt like the main message I got from the EA community was that my judgement was not to be trusted and I should defer, but without explicit instructions how and who to defer to. I did plenty of things just because they were ‘EA’ without actually evaluating how much impact I would be having or how much I would learn.

I thought that my contributions (outreach activities, donations & grantmaking, and general engagement) would ensure that I would get the benefits of being dedicated, like a secure role within the EA structure once it seemed like the EA community was no longer financially constrained. I did not distinguish between ‘professional career development’ and ‘volunteering’, because I viewed everything under the EA community umbrella.

There are many examples of me taking what other EAs said much too seriously, but here are some of the more impactful ones:

When I joined the community, I received plenty of extremely positive feedback. I trusted these statements too much, and wrongly had the impression that I was doing well and would by default be able to do a lot of good in the near future. I also over-weighted occurrences like being invited to more exclusive events. When an organisation leader said to me I could work at their organisation, I interpreted it literally. When other senior EAs asked me to do something specific, I thought I should do as told.

I stopped doing political work (in ~2014/2015) as I had the impression that it was EA consensus that this was not particularly valuable. I now regret this; it might have opened high-impact routes later on. The network I had there was great as well - some of the people I used to work with have done very well on the political ladder.

When I received a trial offer from Open Phil as a Research Analyst in 2018, I thought this would mostly end my job search. Even though I could not do the trial for visa reasons, I thought the offer would make it easy to find a job in the EA sphere elsewhere. This was based both on things Open Phil told me and on the very high regard in which the community seemed to hold this application process and opportunity. That you could get to the trial stage but still not be able to find a job in the EA sphere caught me off guard.

I also focused far too much on working at EA organisations. In 2015, talk about how Effective Altruism was talent-constrained became popular. Up until that point, I had been prepared to aim for earning-to-give later, take on an advocacy role, or go into academia. But at that point I started to take it for granted that I would be able to work at EA orgs after my degree. I did not think enough about how messages can be distorted once they travel through the community and how this message might not have been the one 80,000 Hours had intended. I might have noticed this had I paid more attention to their writing on the topic of talent constraints and less to the version echoed by the community. Paying more attention to their written advice, I could have noticed the conflict between the talent-constraint message as echoed by the community and the actual 80,000 Hours advice to keep your options open and have a Plan A, B and Z.

Similar things can be said about the possible risks of newly started projects. While I think the reasoning e.g. 80,000 Hours brought forth on the topic is sound, again I did not appreciate how messages get distorted and amplified through the community. My interpretation was that my judgement generally was not to be trusted, and if it was not good enough to start new projects myself, I should not make generic career decisions myself, even where the possible downsides were very limited.

I was too willing to take risks.

Some level of risk-taking is good and necessary, but my deference to the EA community made me blind towards the risks I was taking. I did not think carefully enough about the position I would be in if focusing on EA jobs failed: that of a long-unemployed graduate with no particular skills.

The advice prominent within EA to be willing to take more risks was geared towards ‘talented, well-credentialed graduates without responsibilities’ - whether talented or not, I am not well-credentialed and have dependents. Therefore I should have questioned more how much this advice really applied to me.

I stopped trying to do well in my degree, as good grades seemed unnecessary if I was just going to work at EA organisations later anyway. I thought the time could be much better invested in community building or better donation decisions. I also did not attempt any kind of research, despite this still being the path I was most interested in.

I put much less effort into developing my broader capabilities and intellectual interests. I did not think about the fact that most of my EA volunteering activities would bring me little career capital. I should have paid more attention to the fact that it would be especially hard for me to enter academia given my grades, or to pursue other direct work paths which usually require years of up-front investment.

Unfortunately, even now that I understand that direct work is not only about working at EA orgs, I am not really qualified to start on any of the most-discussed routes without substantial upskilling, which in turn is not easily accessible to me.

I underestimated the cost of having too few data points.

This one sounds a bit nebulous; there are a few different aspects I am trying to get at.

Something I struggled with while trying to find a job was making sense of the little information I had. I was endlessly confused why I seemed to have done so well in some contexts, but was still failing to get anywhere. Often I wondered whether there was something seriously wrong with me, as I would guess is often the case for outwardly qualified people who are underemployed regardless. I now think there was nothing to explain here - most people who want a job as much as I did apply to more than one highly competitive job every month or two.

While I knew on some level that a lot of randomness is involved in many processes, including job applications, I still tried to find meaning in the little information I had. Instead, my goal should have been to gather much more data, especially as I got more desperate. To be fair to my past self, I would have been keen to apply to more jobs, but as I was only interested in EA org jobs, there were way too few to apply to.

It was obvious to me that I was missing out on lots of career capital including valuable skills while not working: true almost by definition. But I do not think I appreciated how valuable work is as a calibration exercise. Whenever people talked about ‘figuring out what you are good at’, I didn’t understand why this was so valuable - while there would be some information I would gain, this did not seem that important compared to just getting better at things.

Now I think I mostly misunderstood what people were trying to get at with ‘figuring out what you are good at’. What you are good at is mostly about relative, not absolute, performance. For me, learning ‘what I am good at’ this year has mostly not looked like discovering I am better or worse at a skill than I thought, but instead discovering how good other people are at the same skills. Particularly useful are comparisons to people who are otherwise similar and might work in a similar profession. I have gotten some positive feedback on traits I thought I was weak on, but on which I was apparently still better than other analysts. I have also found out about some skill axes that I never realised there was any meaningful variance on.

I did not notice my ignorance around how some social structures operate.

I found it really difficult to understand how I was supposed to navigate the ‘professional’ EA community and had a poor model of what other people’s expectations were.

I had no luck applying the advice to ‘talk to other people’ when trying to find a job through informal networks. It did work for people around me, and I still don’t really know why; probably the whole conversation needs to be framed in a very specific way. The couple of times I tried to be more direct I made a botch of it.

I also had the wrong framework when it came to interactions with potential employers, and wider experience with applying to jobs (as well as running more application processes myself) has helped me see that. My understanding was that potential employers would judge whether I was a generally smart and capable person. This was wrong; a better focus would have been whether I can help them solve their very specific problems. I probably would have approached some interactions with potential employers differently if I had internalized this earlier. I failed to model other people’s preferences in these interactions as separate from my own and did not try hard enough to understand what they were.

I thought having no strong preferences for a role within the EA community would be considered a good thing, as it proved that I was being cooperative. But most employers probably want to hear about how their role fits your particular skills and that you are really excited about it, including within the EA sphere.

I underestimated the cost to my motivation and wellbeing, and how these being harmed could curb my expected impact.

By late 2018, I had been waiting for opportunities for a year and felt pretty bad. At that point, my first priority should have been to get out of that state. When I accepted the research assistant role, I was insufficiently honest with myself about whether I would be able to do well given how burnt out I felt.

As there was no clear path from being a research assistant on a grant into something more secure and well defined, I just stayed in a burnt out state for longer. In autumn 2019 I thought it would be better for me to mentally distance myself from the EA community, which did make me feel a bit better.

I was still often profoundly miserable about my employment situation. The big shift came after I started my data analyst job in January 2020: the misery which had reduced me to tears every week for over 2 years was basically cured overnight. While the direction of the change is not surprising, it has been astounding to me how much more productive I have been this year compared to previous years.

Being miserable also hindered my ability to assess my prospects rationally. It took me a long time to properly give up on my EA job prospects: whenever I thought this path might not work out for me at all, the thought was too horrifying to seriously contemplate. Having to start again at zero with my investments having been in vain just seemed too awful. Perhaps this deserves its own mention as a high-level systematic error: when confronted with failure, I had left no line of retreat.

What next?

As mentioned, I have been much, much happier since I started working in the Civil Service, especially now with the promotion. It is really great for me to be in an environment in which I feel encouraged to take as much responsibility as I want and solve problems by myself.

My main goal this year has been to become more enthusiastic and excitable, especially regarding my ability to have an impact, and I am glad to report that this has been going very well! I have also felt much more in control of my future and have been able to be strategic about my goals.

For the near future my main aim in my job is still to gain more skills and get much better calibrated on what my strengths and weaknesses are relative to other people. I also want to get much better calibrated on what might be possible for me in the medium to long term, as I still want to consider options broadly.

I am still in the process of figuring out what my personal beliefs are on where I can have the most impact in addition to the personal fit considerations. This year I have spent a lot of time thinking about how large a role I want doing good to play in my life as well as moral considerations on what I consider doing good to be. Next year I hope to make more progress on my beliefs around cause prioritisation as well as practical considerations on how to do a lot of good. Ironically, mentally distancing myself from the EA sphere a bit is just what I needed to make this a plausible goal.

A critical assessment of what I have written here is very welcome! Please point it out if you think I forgot some mistakes or misanalysed them.

Special thanks to AGB, Richard Ngo, Max Daniel and Jonas Vollmer who gave feedback on drafts of this post.

Comments (73)

Around 2015-2019 I felt like the main message I got from the EA community was that my judgement was not to be trusted and I should defer, but without explicit instructions how and who to defer to.
...
My interpretation was that my judgement generally was not to be trusted, and if it was not good enough to start new projects myself, I should not make generic career decisions myself, even where the possible downsides were very limited.

I also get a lot of this vibe from (parts of) the EA community, and it drives me a little nuts. Examples:

  • Moral uncertainty, giving other moral systems weight "because other smart people believe them" rather than because they seem object-level reasonable
  • Lots of emphasis on avoiding accidentally doing harm by being uninformed
  • People bring up "intelligent people disagree with this" as a reason against something rather than going through the object-level arguments

Being epistemically modest by, say, replacing your own opinions with the average opinion of everyone around you, might improve the epistemics of the majority of people (in fact it almost must by definition), but it is a terrible idea on a group level: it's a recipe for information cascades, groupthink... (read more)

That last paragraph is a good observation, and I don’t think it’s entirely coincidental. 80k has a few instances in their history of accidentally causing harm, which has led them (correctly) to be very conservative about it as an organisation.

The thing is, career advice and PR are two areas 80k is very involved in and which have a particular likelihood of causing as much harm as good, due to bad advice or distorted messaging. Most decisions individual EAs make are not like this, and it’s a mistake if they treat 80k’s caution as a reflection of how cautious they should be. Or worse, act even more cautiously, reasoning that the combined intelligence of the 80k staff is greater than their own (likely true, but likely irrelevant).

I don't think any of 80k's career advice has caused much harm compared to the counterfactual of not having given that advice at all, so I feel a bit confused about how to think about this. Even the grossest misrepresentation of EtG being the only way to do good or something still strikes me as better than the current average experience a college graduate has (which is no guidance, and all career advice comes from companies trying to recruit you).

I think the comparison to "the current average experience a college graduate has" isn't quite fair, because the group of people who see 80k's advice and act on it is already quite selected for lots of traits (e.g. altruism). I would be surprised if the average person influenced by 80k's EtG advice had the average college graduate experience in terms of which careers they consider and hence, where they look for advice, e.g. they might already be more inclined to go into policy, the non-profit sector or research to do good.

(I have no opinion on how your point comes out on the whole. I wasn't around in 2015, but intuitively it would also surprise me if 80k didn't do substantially more good during that time than bad, even bracketing out community building effects (which, admittedly, is hard to do).)

(Disclaimer: I am OP’s husband)

As it happens, there are a couple of examples in this post where poor or distorted versions of 80k advice arguably caused harm relative to no advice; over-focus on working at EA orgs due to ‘talent constraint’ claims probably set Denise’s entire career back by ~2 years for no gain, and a simplistic understanding of replaceability was significantly responsible for her giving up on political work.

Apart from the direct cost, such events leave a sour taste in people’s mouths and so can cause them to dissociate from the community; if we’re going to focus on ‘recruiting’ people while they are young, anything that increases attrition needs to be considered very carefully and skeptically.

I do agree that in general it’s not that hard to beat ‘no advice’, rather a lot of the need for care comes from simplistic advice’s natural tendency to crowd out nuanced advice.

I don’t mean to bash 80k here; when they become aware of these things they try pretty hard to clean it up, they maintain a public list of mistakes (which includes both of the above), and I think they apply way more thought and imagination to the question of how this kind of thing can happen than most other places, even most other EA orgs. I’ve been impressed by the seriousness with which they take this kind of problem over the years.

Habryka:
Yeah, totally agree that we can find individual instances where the advice is bad. Just seems pretty unlikely for that average to be worse, even just by the lights of the person who is given advice (and ignoring altruistic effects, which presumably are more heavy-tailed).

I think I probably agree with the general thrust of this comment, but disagree on various specifics.

'Intelligent people disagree with this' is a good reason against being too confident in one's opinion. At the very least, it should highlight there are opportunities to explore where the disagreement is coming from, which should hopefully help everyone to form better opinions.

I also don't feel like moral uncertainty is a good example of people deferring too much.

A different way to look at this might be that if 'good judgement' is something that lots of people need in their careers, especially if they don't follow any of the priority paths (as argued here), this is something that needs to be trained - and you don't train good judgement by always blindly deferring.

Dawn Drescher:
Yeah, and besides the training effect there is also the benefit that while one person who disagrees with hundreds is unlikely to be correct, if they are correct, it’s super important that those hundreds of others get to learn from them. So it may be very important in expectation to notice such disagreements, do a lot of research to understand one’s own and the others’ position as well as possible, and then let them know of the results. (And yes, the moral uncertainty example doesn’t seem to fit very well, especially for antirealists.)
Jordan_Warner:
I'd say that "Intelligent people disagree with this" is a good reason to look into what those people think and why - I agree that it should make you less certain of your current position, but you might actually end up more certain of your original opinion after you've understood those disagreements.

See also answers here mentioning that EA feels "intellectually stale". A friend says he thinks a lot of impressive people have left the EA movement because of this :(

I feel bad, because I think maybe I was one of the first people to push the "avoid accidental harm" thing.

"Stagnation" was also the 5th most often mentioned reason for declining interest in EA, over the last 12 months, when we asked about this in the 2019 EA Survey, accounting for about 7.4% of responses.

Thanks, David, for that data.

There was some discussion about the issue of EA intellectual stagnation in this thread (like I say in my comment, I don't agree that EA is stagnating).

Yeah, I think it's very difficult to tell whether the trend which people take themselves to be perceiving is explained by there having been a larger amount of low hanging fruit in the earlier years of EA, which led to people encountering a larger number of radical new ideas in the earlier years, or whether there's actually been a slowdown in EA intellectual productivity. (Similarly, it may be that because people tend to encounter a lot of new ideas when they are first getting involved in EA, people perceive the insights being generated by EA as slowing down). I think it's hard to tell whether EA is stagnating in a worrying sense in that it is not clear how much intellectual progress we should expect to see now that some of the low hanging fruit is already picked.

That said, I actually think that the positive aspects of EA's professionalisation (which you point to in your other comment) may explain some of the perceptions described here, which I think are on the whole mistaken. I think in earlier years, there was a lot of amateur, broad speculation for and against various big questions in EA (e.g. big a priori arguments about AI versus animals, much of which was pretty wild and ill-informed). I think, conversely, we now have a much healthier ecosystem, with people making progress on the myriad narrower, technical problems that need to be addressed in order to address those broader questions.

Denise_Melchin:
Thanks David, this is more or less what I was trying to express with my response to Stefan in that thread. I want to add that "making intellectual progress" has two different benefits: One is the obvious one, figuring out more true things so they can influence our actions to do more good. As you say, we may actually be doing better on that one. The other one is to attract people to the community by it being an intellectually stimulating place. We might be losing the kind of people who answered 'stagnation' in the poll above, as they are not able to participate in the professionalised debates, if they happen in public at all. On the other hand, this might mean that we are not deterring people anymore who may have felt like they need to be into intellectual debates to join the EA community. I don't know what the right trade-off is, but I suspect it's actually more important not to put the latter group off.

I actually think the principles of deference to expertise and avoiding accidental harm are in principle good and we should continue using them. However, in EA the barrier to being seen as an expert is very low - often it's enough to have written a blog or forum post on something, having invested less than 100 hours in total. For me an expert is someone who has spent the better part of his or her career working in a field, for example climate policy. While I think the former is still useful to give an introduction to a field, the latter form of expertise has been somewhat undervalued in EA.

Stefan_Schubert:
I guess it depends on what topics you're referring to, but regarding many topics, the bar for being seen as an expert within EA seems substantially higher than 100 hours.

Lots of emphasis on avoiding accidentally doing harm by being uninformed

I gave a talk about this, so I consider myself to be one of the repeaters of that message. But I also think I always tried to add a lot of caveats, like "you should take this advice less seriously if you're the type of person who listens to advice like this" and similar. It's a bit hard to calibrate, but I'm definitely in favor of people trying new projects, even at the risk of causing mild accidental harm, and in fact I think that's something that has helped me grow in the past.

If you think these sorts of framings still miss the mark, I'd be interested in hearing your reasoning about that.

I'm somewhat sympathetic to the frustration you express. However, I suspect the optimal response isn't to be more or less epistemically modest indiscriminately. Instead, I suspect the optimal policy is something like:

  • Always be clear and explicit to what extent a view you're communicating involves deference to others.
  • Depending on the purpose of a conversation, prioritize (possibly at different stages) either object-level discussions that ignore others' views or forming an overall judgment that includes epistemic deference.
    • E.g. when the purpose is to learn, or to form an independent assessment of something, epistemic deference will often be a distraction.
    • By contrast, if you make a decision with large-scale and irreversible effects on the world (e.g. "who should get this $5M grant?") I think it would usually be predictably worse for the world to ignore others' views.
      • On the other hand, I think literally everyone using the "average view" among the same set of people is suboptimal even for such purposes: it's probably better to have less correlated decisions in order to "cover the whole space" of reasonable options. (This might seem odd on some naive models of decision-making, but I thin
... (read more)

Something I want to add here:

I am not sure whether my error was how much I was deferring in itself. But the decision to defer or not should be made on well-defined questions, with clearly defined 'experts' you might be deferring to. This is not what I was doing. I was deferring on a nebulous question ('what should I be doing?') to an even more nebulous expert audience (a vague sense of what 'the community' wanted).

What I should have done first instead is define the question better: which roles should I be pursuing right now?

This can then be broken down further into subquestions on cause prioritisation, which roles are promising avenues within causes I might be interested in, which roles I might be well suited for, etc. - answers I then need to aggregate in a sensible fashion to decide which roles I should be pursuing right now.

For each of these subquestions I need to make a separate judgement. For some it makes more sense to defer, for others, less so. Disappointingly, there is no independent expert panel investigating what kind of jobs I might excel at.

But then who to defer to, if I think this is a sensible choice for a particular subquestion, also needs to... (read more)

if you make a decision with large-scale and irreversible effects on the world (e.g. "who should get this $5M grant?") I think it would usually be predictably worse for the world to ignore others' views

Taking into account specific facts or arguments made by other people seems reasonable here. Just writing down e.g. "person X doesn't like MIRI" in the "cons" column of your spreadsheet seems foolish and wrongheaded.

Framing it as "taking others' views into account" or "ignoring others' views" is a big part of the problem, IMO—that language itself directs people towards evaluating the people rather than the arguments, and overall opinions rather than specific facts or claims.

I think we disagree. I'm not sure why you think that even for decisions with large effects one should only or mostly take into account specific facts or arguments, and am curious about your reasoning here.

I do think it will often be even more valuable to understand someone's specific reasons for having a belief. However, (i) in complex domains achieving a full understanding would be a lot of work, (ii) people usually have incomplete insight into the specific reasons for why they hold a certain belief themselves and instead might appeal to intuition, (iii) in practice you only have so much time and thus can't fully pursue all disagreements.

So yes, always stopping at "person X thinks that p" and never trying to understand why would be a poor policy. But never stopping at that seems infeasible to me, and I don't see the benefits from always throwing away the information that X believes p in situations where you don't fully understand why.

For instance, imagine I pointed a gun to your head and forced you to now choose between two COVID mitigation policies for the US for the next 6 months. I offer to give you additional information of the type "X thinks that p" with some basic facts ... (read more)

Ben Kuhn:
I note that the framing / example case has changed a lot between your original comment / my reply (making a $5m grant and writing "person X is skeptical of MIRI" in the "cons" column) and this parent comment ("imagine I pointed a gun to your head and... offer to give you additional information;" "never stopping at [person X thinks that p]"). I'm not arguing for entirely refusing to trust other people or dividing labor, as you implied there. I specifically object to giving weight to other people's top-line views on questions where there's substantial disagreement, based on your overall assessment of that particular person's credibility / quality of intuition / whatever, separately from your evaluation of their finer-grained sub-claims.

If you are staking $5m on something, it's hard for me to imagine a case where it makes sense to end up with an important node in your tree of claims whose justification is "opinions diverge on this but the people I think are smartest tend to believe p." The reason I think this is usually bad is that (a) it's actually impossible to know how much weight it's rational to give someone else's opinion without inspecting their sub-claims, and (b) it leads to groupthink/herding/information cascades.

As a toy example to illustrate (a): suppose that for MIRI to be the optimal grant recipient, it both needs to be the case that AI risk is high (A) and that MIRI is the best organization working to mitigate it (B). A and B are independent. The prior is (P(A) = 0.5, P(B) = 0.5). Alice and Bob have observed evidence with a 9:1 odds ratio in favor of A, so think (P(A) = 0.9, P(B) = 0.5). Carol has observed evidence with a 9:1 odds ratio in favor of B. Alice, Bob and Carol all have the same top-line view of MIRI (P(A and B) = 0.45), but the rational aggregation of Alice and Bob's "view" is much less positive than the rational aggregation of Bob and Carol's.

It's interesting that you mention hierarchical organizations because I think they usually foll
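A minimal sketch of the arithmetic behind this toy example, assuming even prior odds and that each person's 9:1 evidence acts as an independent likelihood ratio (the `posterior` helper is illustrative, not from the comment):

```python
def posterior(prior_odds, *likelihood_ratios):
    """Combine prior odds with independent likelihood ratios; return a probability."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Individual views: everyone arrives at the same top-line P(A and B) = 0.45.
alice = posterior(1, 9) * posterior(1)  # P(A) = 0.9, P(B) = 0.5
carol = posterior(1) * posterior(1, 9)  # P(A) = 0.5, P(B) = 0.9

# Aggregating the underlying evidence instead of the top-line views:
alice_bob = posterior(1, 9, 9) * posterior(1)  # two 9:1 updates on A only -> ~0.49
bob_carol = posterior(1, 9) * posterior(1, 9)  # 9:1 on A and 9:1 on B     -> 0.81

print(alice, carol, alice_bob, bob_carol)
```

Under these assumptions, Bob and Carol's pooled evidence supports the grant far more strongly (0.81) than Alice and Bob's (~0.49), even though all three report the same 0.45 top-line view.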
Max_Daniel:
I think I perceive less of a difference between the examples we've been discussing, but after reading your reply I'm also less sure if and where we disagree significantly. I read your previous claim as essentially saying "it would always be bad to include the information that some person X is skeptical about MIRI when making the decision whether to give MIRI a $5M grant, unless you understand more details about why X has this view". I still think this view basically commits you to refusing to see information of that type in the COVID policy thought experiment. This is essentially for the reasons (i)-(iii) I listed above: I think that in practice it will be too costly to understand the views of each such person X in more detail. (But usually it will be worth it to do this for some people, for instance for the reason spelled out in your toy model. As I said: I do think it will often be even more valuable to understand someone's specific reasons for having a belief.)

Instead, I suspect you will need to focus on the few highest-priority cases, and in the end you'll end up with people X1,…,Xl whose views you understand in great detail, people Y1,…,Ym where your understanding stops at other fairly high-level/top-line views (e.g. maybe you know what they think about "will AGI be developed this century?" but not much about why), and people Z1,…,Zn of whom you only know the top-line view of how much funding they'd want to give to MIRI.

(Note that I don't think this is hypothetical. My impression is that there are in fact long-standing disagreements about MIRI's work that can't be fully resolved or even broken down into very precise subclaims/cruxes, despite many people having spent probably hundreds of hours on this. For instance, in the writeups to their first grants to MIRI, Open Phil remark that "We found MIRI’s work especially difficult to evaluate", and the most recent grant amount was set by a committee that "average[s] individuals’ allocations". See also this

If 100 forecasters (who I roughly respect) look at the likelihood of a future event and think it's ~10% likely, and I look at the same question and think it's ~33% likely, I think it would be incorrect, in my private use of reason, for my all-things-considered view not to update somewhat downwards from 33%.

I think this continues to be true even if we all in theory have access to the same public evidence, etc. 

Now, it does depend a bit on the context of what this information is for. For example if I'm asked to give my perspective on a group forecast (and I know that the other 100 forecasters' predictions will be included anyway), I think it probably makes sense for me to continue to publicly  provide ~33% for that question to prevent double-counting and groupthink. 


But I think it will be wrong for me to believe 33%, and even more so, wrong to say 33% in a context where somebody else doesn't have access to the 100 other forecasters. 

An additional general concern here to me is computational capacity/kindness-- sometimes (often) I just don't have enough time to evaluate all the object-level arguments! You can maybe argue that until I evaluate all the o... (read more)

Max_Daniel:
I'm not sure if we have a principled disagreement here; it's possible that I just described my view badly above. I agree that one should act such that one's all-things-considered view is that one is making the best decision (the way I understand that statement it's basically a tautology). Then I think there are some heuristics for which features of a decision situation make it more or less likely that deferring more (or at all) leads to decisions with that property.

I think on a high level I agree with you that it depends a lot "on the context of what this information is for", more so than on e.g. importance. With my example, I was also trying to point less to importance per se and more to something like how the costs and benefits are distributed between yourself and others. This is because, very loosely speaking, I expect not deferring to often be better if the stakes are concentrated on oneself and more deference to be better if one's own direct stake is small. I used a decision with large effects on others largely because then it's not plausible that you yourself are affected by a similar amount; but it would also apply to a decision with zero effect on yourself and a small effect on others. Conversely, it would not apply to a decision that is very important to yourself (e.g. something affecting your whole career trajectory).
Linch:
Apologies for the long delay in response; feel free to not reply if you're busy. Hmm, I still think we have a substantive rather than framing disagreement (though I think it is likely that our disagreements aren't large). Perhaps this heuristic is really useful for a lot of questions you're considering. I'm reminded of AGB's great quote:

For me personally and the specific questions I've considered, I think considering whether/how much to defer by dividing into buckets of "how much it affects myself or others" is certainly a pretty useful heuristic in the absence of better heuristics, but it's mostly superseded by a different decomposition:

  1. Epistemic -- In a context-sensitive manner, do we expect greater or lower deference in this particular situation to lead to more accurate beliefs?
  2. Role expectations* -- Whether the explicit and implicit social expectations on the role you're assuming privilege deference or independence.

So I think a big/main reason it's bad to defer completely to others (say 80k) on your own career decisions is epistemic: you have so much thought and local knowledge about your own situation that your prior should very strongly be against others having better all-things-considered views on your career choice than you do. I think this is more crux-y for me than how much your career trajectory affects yourself vs others (at any rate hopefully as EAs our career trajectories affect many others anyway!).

On the other hand, I think my Cochrane review example above is a good epistemic example of deference: even though my dental hygiene practices mainly affect myself and not others (perhaps my past and future partners may disagree), I contend it's better to defer to the meta-analysis over my own independent analysis in this particular facet of my personal life. The other main (non-epistemic) lens I'd use to privilege greater or lower humility is whether the explicit and implicit social expectations privilege deference or independence. F
Max_Daniel:
Thanks! I'm not sure if there is a significant difference in how we'd actually make decisions (I mean, on prior there is probably some difference). But I agree that the single heuristic I mentioned above doesn't by itself do a great job of describing when and how much to defer, and I agree with your "counterexamples". (Though note that in principle it's not surprising if there are counterexamples to a "mere heuristic".) I particularly appreciate you describing the "Role expectations" point. I agree that something along those lines is important. My guess is that if we had debated specific decisions I would have implicitly incorporated this consideration, but I don't think it was clear to me before reading your comment that this is an important property that will often influence my judgment about how much to defer.
richard_ngo:
I think that in theory Max is right, that there's some optimal way to have the best of both worlds. But in practice I think that there are pretty strong biases towards conformity, such that it's probably worthwhile to shift the community as a whole indiscriminately towards being less epistemically modest.

As one example, people might think "I'll make up my mind on small decisions, and defer on big decisions." But then they'll evaluate what feels big to them, rather than to the EA community overall, and thereby the community as a whole will end up being strongly correlated even on relatively small-scale bets. I think your comment itself actually makes this mistake - there's now enough money in EA that, in my opinion, there should be many $5M grants which aren't strongly correlated with the views of EA as a whole.

In particular, I note that venture capitalists allocate much larger amounts of money explicitly on anti-conformist principles. Maybe that's because startups are a more heavy-tailed domain than altruism, and one where conformity is more harmful, but I'm not confident about that; the hypothesis that we just haven't internalised the "hits-based" mentality as well as venture capitalists have also seems plausible.

(My best guess is that the average EA defers too much rather than too little. This and other comments on deference are to address specific points made, rather than to push any particular general takes.)

Maybe that's because startups are a more heavy-tailed domain than altruism, and one where conformity is more harmful

I think this is part of the reason. A plausibly bigger reason is that VC funding can't result in heavy left-tails. Or rather, left-tails in VC funding are very rarely internalized. Concretely, if you pick your favorite example of "terrible startup for the future of sentient beings," the VCs in question very rarely get in trouble, and approximately never get punished proportional to the counterfactual harm of their investments. VC funding can be negative for the VC beyond the opportunity cost of money (eg via reputational risk or whatever), but the punishment is quite low relative to the utility costs. 

Obviously optimizing for increasing variance is a better deal when you clip the left tail, and optimizing for reducing variance is a better deal when you clip the right tail.

(I also independently think that heavy left tails in the utilitarian sense are probably less common in VC funding than in EA, but I think this is not necessary for my argument to go through).

richard_ngo:
Good point, I agree this weakens my argument.
Max_Daniel:
I agree it's possible that because of social pressures or similar things the best policy change that's viable in practice could be an indiscriminate move toward more or less epistemic deference. Though I probably have less of a strong sense that that's in fact true.

(Note that when implemented well, the "best of both worlds" policy could actually make it easier to express disagreement because it clarifies that there are two types of beliefs/credences to be kept track of separately, and that one of them has to exclude all epistemic deference. Similarly, to the extent that people think that avoiding 'bad, unilateral action' is a key reason in favor of epistemic deference, it could actually "destigmatize" iconoclastic views if it's common knowledge that an iconoclastic pre-deference view doesn't imply unusual primarily-other-affecting actions, because primarily-other-affecting actions depend on post-deference rather than pre-deference views.)

I agree with everything you say about $5M grants and VCs. I'm not sure if you think my mistake was mainly to consider a $5M stake a "large-scale" decision or something else, but if it's the former I'm happy to concede that this wasn't the best example to give for a decision where deference should get a lot of weight (though I think we agree that in theory it should get some weight?).
MichaelA🔸:
I strongly agree that "the optimal response isn't to be more or less epistemically modest indiscriminately", and with the policy you suggest. If I recall correctly, somewhat similar recommendations are made in Some thoughts on deference and inside-view models and EA Concepts: Share Impressions Before Credences.
Timothy_Liptrot:
I disagree, Max. We can all recall anecdotes of overconfidence because they create well-publicized narratives. With hindsight bias, it seems obvious that overconfidence was the subject. So naturally we overestimate overconfidence risks, just like nuclear power. The costs of underconfidence are invisible and ubiquitous. A grad student fails to submit her paper. An applicant doesn't apply. A graduate doesn't write down her NGO idea. Because you can't see the costs of underconfidence, they could be hundreds or thousands of times the overconfidence costs.

To break apart the question:

  1. Should people update based on evidence and not have rigid world-models? Is people disagreeing with you moderate evidence?
     * Yes to both.
  2. Once someone builds the best world-model they can, should they defer to higher-status people's models?
     * Much much less often than we currently do.
  3. How much should we weight disagreement between our models and the models of others?
     * See Yud's book: https://www.lesswrong.com/posts/svoD5KLKHyAKEdwPo/against-modest-epistemology

While reading this post a few days ago I became uncomfortably aware of the fact that I have made a huge ongoing mistake over the last couple of years by letting myself not put much effort into developing and improving my personal career plans. On some level I've known this for a while, but this post made me face this truth more directly than I had done previously.

During this period I often outright avoided thinking about my career plans even though I knew that making better career plans was perhaps the most important way to increase my expected impact. I think I avoided thinking about my career plans (and avoided working on developing them as much as it would have been best for me to) in part because whenever I tried thinking about my career plans I'd be forced to acknowledge how poorly I was doing career-wise relative to how well I potentially could have been doing, which was uncomfortable for me. Also, I often felt clueless about which concrete options I ought to pursue and I did not like the feeling of incompetence this gave me. I'd get stuck in analysis paralysis and feel overwhelmed without making any actual progress toward developing good plans.

It feels embarrassing to admit t... (read more)

This comment made me very happy! If you think you would benefit from talking through your career thoughts with someone and/or being accountable to someone, feel free to get in touch.

Kirsten:
William, I just want you to know that it's really great that you're going to have more impact by thinking through your career plans. Good luck!
MaxRa:
I relate to what you wrote a lot, thanks for sharing.

Thank you for writing this up! I think it's helpful to see how the different eras of EA advice have played out in people's actual decisions.

Thank you! I really appreciate people writing up reflections like this.

Thanks for writing and sharing your insights! I think the whole EA community would be a lot healthier if people had a much more limited view of EA, striving to take jobs that have positive impact in the long  run, rather than focussing so much on the much shorter-term goal of taking jobs in high-profile EA organisations, at great personal expense.

I share the view that a lot of EAs probably focus much too much on getting roles at explicitly EA organisations, implicitly interpret "direct work" as "work at an explicitly EA org", should broaden the set of roles and orgs they consider or apply for, etc. And obviously there are many roles outside of explicitly EA orgs where one can have a big positive impact. I think highlighting this is valuable, as this post and various other posts over the last couple of years have, as 80,000 Hours tries to do, and as this comment does.

That said, I disagree with part of what you're saying, or how it can be interpreted. I think that there are people for whom working at explicitly EA organisations is probably the most impactful thing they can do, and people for whom it's probably the most enjoyable and rewarding thing they can do, and people for whom both of those things are true. And this can be true in the long-term as well as the short term. And it's hard to say in advance who will be one of those people, so it's probably best if a substantial portion of the unsure people apply to roles at EA and non-EA orgs and see what happens.

So I don't think taking jobs in EA orgs (high-profile or not) nece... (read more)

And applying for jobs in EA orgs also doesn't have to come at great personal expense

I want to push back against this point a bit. Although I completely agree that you shouldn't treat working at non-EA orgs as a failure!

In my experience, applying for jobs in EA orgs has been very expensive compared to applying to other jobs, even completely ignoring any mental costs. There was a discussion about this topic here as well, and my view on the matter has not changed much - except I now have some experience applying to jobs outside EA orgs, backing up what I previously thought.

Getting to the last stage of the application processes I went through at EA orgs routinely took a dozen hours, and often dozens. This did not happen once when I applied to jobs outside of EA orgs. Application processes were just much shorter. I don't think applying to EA jobs as I did in 2018 would have been compatible with having a full-time job, or only with great difficulty.

Something I also encountered only in EA org application processes was them taking several months or being very mismanaged - going back and forth on where someone was in the application process, or having an applicant invest dozens of hours only to inform them that the org was actually unable to provide visas.

Interesting. For what it's worth, that doesn't match my own experience. (Though I'm of course not contesting that that was the case for you, and I don't know whether your or my experiences would be more representative of the experiences other people have had/would have.) I'll share my own experience below, in case the extra data point is useful to anyone.

  • I found some EA org application processes quite quick, some moderately long, and some quite long, and found the same for non-EA org application processes. I don't think there was an obvious difference in average time taken for me.
    • I think the two longest application processes I've gone through were for the Teach For Australia program (which I got into; this was before I learned about EA) and the UK Civil Service Fast Stream (where I got to the final stage but ultimately didn't receive an offer)
      • Though some of my EA org application processes did come close to the length of those two
  • I did these applications while doing a full-time job
    • Though I'm a sort of person who for some reason finds it natural to do work-like things for long hours (as long as I feel fairly autonomous in how I do this), which I know won't be true for everyone
  • From me
... (read more)
Kirsten:
For anyone else reading this, the UK Civil Service Fast Stream is a 4+month application process that at one point requires you to take a day off work and travel in to do work tests; it's a leadership programme with an application process that's much more time-consuming than anything I've ever applied for before or since.
MichaelA🔸:
Yeah, thanks for giving that extra context. The Teach For Australia application process is quite similar. 
Aaron Gertler 🔸:
I've also experienced the average EA org application process to be a bit more disordered/chaotic than the average non-EA job I applied to (though I was also ghosted for weeks after a final-round interview with a major financial firm). I expect this is almost entirely a function of org size, but it's still unfortunate and something I hope is changing. As far as time required by application processes, I had a very different experience while applying to many orgs, leaving me unsure of how the "average" process compares to the average private-sector process. I'm glad to see you re-sharing your previous comment about this; I'll link to my previous reply so that people can have a bit more data.
Jakob_J:
Hi Michael, thanks for your reply! I agree with everything you are saying, and I did not mean to imply that people should not consider working at explicitly EA organisations. Indeed, I would also be interested in working at one of them at some point!

The point I wanted to make is that the goal of "getting a job at an EA organisation" in itself is a near-term career goal, since it does not answer many of the questions choosing a career entails, many of which have been highlighted in the post above as well as by 80,000 Hours. I am thinking of questions like: How do I choose a field where I would both enjoy the work and have an impact? How do I avoid significant negatives that would stop me having a meaningful and happy life and career? How do I build the skills that make me attractive in the field I want to work in? Of course, we'll never get everything right, but this is a more nuanced view than focussing all your efforts on getting a job at an EA organisation.

I would also like to see more discussions of "hybrid" careers, where one for example builds a career as an expert in the Civil Service and then joins an EA organisation or acts as an advisor during a one year break to exchange experiences.

Congrats on the promotion! (after just 6.5 months? Kudos) Also thanks for the case study. I think as you pointed out, this is a bit different from some of the common advice, so it's particularly useful.

Thank you so much for writing and sharing this! It’s really useful to get such a thoughtful perspective from someone with long experience of seeking an impactful career. 

A couple of the pieces that feel particularly important to me, and actionable for others, are trying to get a good grade in your degree and applying for a broad range of jobs. I would guess it's a pretty good rule of thumb that doing well at high school and in your degree is going to set you up well for having a broad range of options (though I might be biased by my natural risk aversion). I'd also guess that most people should be pushing themselves to apply for more roles than they'd naturally be inclined to. Doing that seems really hard, both because applying for jobs is horrible and because it feels like the way to do well in a particular application process is to get really invested in that specific job, which seems to preclude applying for many at once.

I’m sad that you found the only way to feel ok about broadening your job search was to distance yourself from EA. It really seems like we’re doing things wrong as a community when we’re not only failing to support people but actively making their liv... (read more)

Hi Michelle,

Sorry for being a bit slow to respond. I have been thinking about your question on how the EA community could be more supportive in situations like the one I experienced, but have struggled to come up with answers I feel particularly confident in. I might circle back to this question at a later point.

For now, I am going to answer instead what I suspect would have made me feel better supported while I was struggling to figure out what I should be doing, but I don't feel particularly confident:

i) Mentorship. Having a sounding board to talk through my decisions (and possibly letting me know that I was being silly when I felt like I wasn't allowed to make my own decisions) might have helped a lot.

ii) Having people acknowledge that I maneuvered myself into a position that wasn't great from the perspective of expected impact, and that this all kind of sucked.

That said, for the latter one, the bottleneck might have been me. I had quite a few people who I thought knew me well express surprise at how miserable I felt at the time, so apparently this was not as obvious as I thought.

I would expect my first suggestion to generalise: mentorship is likely very useful for a lot of people!

I had a lot of contac…

AGB 🔸
I think opinions on how to do better are rather sensitive to broader cause prioritisation and 'what should the movement look like?' questions, but some of Denise's previous writing gives insight into her personal thoughts here, which I happen to agree with. I particularly note the following quote:  
MichaelPlant
Fairly minor thing in a big comment, but I'm curious about whether this works when people actually try it. My own limited experience, and that of a few friends, is that we only got the jobs/roles we really wanted in the end. I wonder if this is because we lacked intrinsic motivation and were probably obviously terrible candidates for the things we were trying to get ourselves excited about. In my case, I tried to become a management consultant after I did my postgrad and only applied for PhDs because I bombed at that (and everything else I applied for).
Ann Garth 🔸
One data point: I recently got a job which, at the time I initially applied for it, I didn't really want (as I went through the interview process and especially now that I've started, I like it more than I thought I would based on the job posting alone).

Thanks for this post. It's really interesting to hear your story and analysis, and I'm really glad you've recently been much happier with how things are going!

When reading, I thought of some other posts/collections that readers might find interesting if they want to dive deeper on certain threads. I'll list them below relevant quotes from your post.

I prioritised being cooperative and loyal to the EA community over any other concrete goal to have an impact. I think that was wrong, or at least wrong without a very concrete plan backing it up.

On this, posts tagged Cooperation & Coordination may be relevant.

I put too much weight on what other people thought I should be doing, and wish I had developed stronger internal beliefs. Because I wanted to cooperate, I considered a nebulous concept of ‘the EA community’ the relevant authority for decision-making. Around 2015-2019 I felt like the main message I got from the EA community was that my judgement was not to be trusted and I should defer, but without explicit instructions how and who to defer to.

On this, posts tagged Epistemic Humility may be relevant.

I did not think enough about how messages can be distorted once they travel throu…
Michelle_Hutchinson
Thanks - so useful to list specific things people could read to follow up!

I love this post. Thanks so much for sharing.

In my first year after finishing my degree and post-graduation summer internship, I only applied to ~7 roles, exclusively at EA organisations. That is way too small a number for anyone who actually wants a job!

Yeah, I think this is a really good point to highlight. Personally, I applied for ~30 roles in 2019 (after hearing about EA in 2018, and while working in a job that wasn't especially high-impact), and I ended up with 2 job offers. So in some naive sense, if I had applied for only 7 roles, I'd have been predicted to get only ~0.5 offers (2 × 7/30).

Being miserable also hindered my ability to assess my prospects rationally. It took me a long time to properly give up on my EA job prospects: whenever I thought this path might not work out for me at all, the thought was too horrifying to seriously contemplate.

I'm really glad to hear that your change of direction (into the UK Civil Service) seems to be going better and making you happier, and I definitely don't want to second-guess that. I also think only a minority of EAs should be primarily aiming for roles at explicitly EA orgs, and that even much of that minority should consider and apply for roles elsewhere (see also my other comment)…

Thank you for pointing that out. I agree candidates should not treat such a small number of applications failing to result in full-time offers as strong evidence that they have no chance.

I am not sure the question of whether one has a chance at an 'EA job' is even a good one, however. 'EA jobs' are actually lots of different roles which are not very comparable to one another, so they do not make for a good category to think in terms of. It is better to think about which field someone might be interested in working in and what kind of role might be a good fit - some of those roles may happen to be at EA orgs, but most will not.

Also, I appreciate I did not clarify this in the post, but I did not get rejected from all 7 roles I applied to in 2018 - I got rejected from 5, dropped out of 1, and in 1 case could not do the 3-month trial stage I was invited to for visa reasons.

Take pleasure in the irony that your "biggest mistakes", and the way you recognise and write about them so candidly, are actually incredibly impactful through the lessons they teach others (and yourself). This post has certainly induced some self-reflection for me. Thanks!

Thanks for writing this post :)

It seems like one of the main factors leading to your mistakes was the way ideas can get twisted as they echo through the community, and the way epistemic humility can turn into deference to experts. I especially resonated with this: 

I did plenty of things just because they were ‘EA’ without actually evaluating how much impact I would be having or how much I would learn.

As a university organizer, I see that nearly all of my experience with EA so far is not “doing” EA, but only learning about it. Not making impac…

Also a big thank you from my side. It really feels like an open and honest account, and to me it seems to shine a light on some very important challenges that the EA community faces in terms of making the best use of available talent. I hope that your story can inspire some voices in the community to become more self-reflective and critical about how some of the dynamics are playing out, right under our own noses. For a community that is dedicated to doing better, being able to learn from stories like yours seems like an important requirement.

In this light, I would love to see comments (or, even better, follow-up posts) addressing what their authors' main takeaways are for the EA community. What can we do to help dedicated people who are starting out to make better decisions for themselves as well as the community?

Thanks for making this. I experienced similar struggles in young adulthood and watched my friends go through them as well. It sucks that so many institutions leave young people with bad models of job-world and inflexible epistemology. It hits when we most need self-reliance and accuracy. IMO, the university professors are the worst offenders.

My disorganized thoughts

  1. Trusting your own models is invaluable. When I was 25 I had a major breakdown from career stress (read about it here). I realized my choices had been unimpeachable and bad. I always did "what I was supposed to do". But the outcomes would have been better if I trusted my own models.

There aren't consolation prizes for following socially approved world models. There are just outcomes. I decided to trust my own evidence and interpretation much more after that.

  2. It stuns me how often young people under-apply for jobs. The costs of over-applying are comparatively small. How to talk my friends out of it?

  3. I'm not sure you took risks, in an emotional sense. Under-applying protects against rejection. Loyalty protects against abandonment. In the moment, applying to an additional job and exploring a new career path feel very…

Have you considered that if you had gotten an EA job, your impact would only have been the difference between what you could have accomplished in that job and what the next-best candidate could have accomplished? As so many people apply to EA jobs, this difference would probably not be very big.

By getting a job outside of EA and donating, your impact would be almost completely additive, as it is unlikely that the next-best person who would have gotten the job would have been an EA and donated themselves.

Especially if you have high earning potential, this effect should be large.

This puts into words so many intuitions that have crept up on me over the last few years - not necessarily so much about EA as about job-hunting as a young generalist with broadly altruistic goals. Like you, earlier this year I transitioned from a research role I didn't find fulfilling into a new position that I have greatly enjoyed, after a multi-year search for a more impactful and rewarding job. During my search, I also made it fairly deep in the application process for research roles at GiveWell and Open Phil and got a lot of positive feedback, but was ulti…

I have a friend who is making the first two mistakes. They are in a different field from EA, but one with a similar totalizing vibe. They rarely apply to jobs that are outside their field and target role but which would provide valuable career capital. They are also quite depressed from the long stretch of unemployment.

What can I say to help them not make these mistakes?

At what point did you realize you regretted not continuing your political work? At that point what stopped you from re-engaging?

Denise_Melchin
Definitely in 2017, possibly earlier, although I am not sure. I went to the main national event of my political organisation in autumn 2017, after not having been for a few years. I could not generally re-engage as I moved countries in 2016. Unfortunately, political networks don't cross borders very well.

This introspective reflection is excellent. How vulnerable you had to be willing to be to share all this. Your focus was on getting a job, but as a business owner I find your points generally applicable to anyone wanting to grow in their space and make an impact on those they serve. Thank you for sharing.

One thing I'm not sure of (and, because this is a thorny question, one I want to ask as lightly as possible) is whether you feel like you've settled into your UK Civil Service job. Alternatively, maybe you see it as a means to something more effective down the line, or perhaps you think working there just is fully, immediately effective? I see both strands in what you've said, and I suppose my rough observations are:

  • It seems like maybe it's not, because you've reopened cause prioritization considerations and seem to be working towards exploring other options now
  • It seems…
Denise_Melchin
First, I apologize for my late response! I completely agree with you that being in a limbo state is the least effective place you can be! Exploring is valuable, but at some point you have to act on what you have learnt. Even if what you learnt was really not what you were hoping to learn...

My perspective is that I can still have a major impact via donations. The more I earn, the more I can donate. The more frugally I live, the more I can donate too. Unfortunately, the EA Community is no longer as supportive of people who see their primary route to impact as being via donations as it once was. I don't think I would have come to my current perspective if I had joined the EA Community in recent years. But Giving What We Can is ramping up again and holding some events, if this is a path you might be interested in.

I am still working in the UK Civil Service and have worked here for 3.5 years by now. I do consider the direct impact of my work in the Civil Service to be trivial compared to the donations I can make thanks to my earnings. I have increased my pay by ~135% compared to when I started (not inflation-adjusted). How much this has increased my donations is a bit harder to say, as my finances and donations are mingled with my husband's.

I do not consider myself settled, as I expect my earnings to top out now. My original plan was to switch to the private sector this year, but this has been tricky as tech is having a downturn. All my Civil Service roles have been data/tech roles. I also considered some other direct work options this year, but there were very few I was interested in (due to poor fit as well as doubts over their actual impact) and none of them panned out.

Hope this helps, and feel free to reach out anytime. I am sorry you are in this position.
[anonymous]

If an EA program could retroactively pay Denise for all her volunteering work, would Denise be able to continue her career within EA? I think there is a good chance the answer is yes. Such retroactive payments could then happen periodically, and we would get to keep a dedicated person whose work may have very high impact (Open Phil research analyst level).

Denise_Melchin
I’m sorry I’m only getting to this comment now. I would like to clarify that the reason I started to work outside the EA sphere was not exclusively financial. I did receive some suggestions of a generic grant in my direction, but decided against exploring this; the work I did as a research assistant was also on a grant. I much prefer a “real job”, and as far as I can tell, there are still very few opportunities in the EA job market I’d find enticing. I care about receiving plenty of feedback and legible career capital, and that’s much easier as part of an organization. (But if someone wants to pay me six figures to write blogposts, they should let me know!)

I’m also a bit confused by your framing of “getting to keep” me. I am right here, reading your comment. :)