When I talk to random people around my town, and ask how they're doing, a large fraction of them open with "well, the world sucks, but..."
They do this whether or not they, personally, are about to report that they're doing well. It feels like a necessary caveat to them. To me, this is a sign that they're in a social context that cuts against their thriving. Does the world suck? Very hard to say with confidence. The world is very big, and there's no clear objective standard. But carrying around "the world sucks" is heavy, and makes it harder to enjoy life.
So I'd like to tell them to consider just... not thinking that anymore? The belief probably isn't paying rent. It's probably not rigorously considered. Just this cached thing, sitting around, making life slightly worse.
But I don't, because they're random acquaintances and their world models are their business. Same with you, reader of this post. But I'm going to make a recommendation to you, anyway. Consider leaving the organized EA movement for a while.
Wait, what?
If your reaction is "but this movement is great! I'm really happy and energized and ready to make the biggest difference I can with all my new exciting friends" then yeah, keep doing your thing! I probably don't have anything to offer you. And for what it's worth, I also agree the movement is great, and I am personally somewhat involved right now.
But I think it's pretty clear that lots of people in the EA orbit are persistently unhappy. And the solution to being persistently unhappy with a social arrangement or memeplex, usually, is to leave.
Famously, this often doesn't occur to the person suffering for a long time, if ever, even if it looks like the obvious correct choice from the outside.
What do I mean by leaving?
I don't mean "stop caring about doing good". If you've taken the Giving What We Can pledge, for example, I'd definitely keep it. What I mean is something more like "stop expecting EA to provide for any of your core needs." This includes, at least:
- Social - have non-EA friends. Ideally have some be local. Talk about other stuff with them, mostly.
- Financial - do not rely on EA funding sources for income that you couldn't do without. Don't apply for EA jobs.
- Emotional - do not have a unidimensional sense of self-worth that boils down to "how am I scoring on a vague, amorphous impact scale as envisioned in EA terms".
Actually, that's probably about it. If you're persistently unhappy, and don't feel fulfilled socially, financially, or emotionally, stop doing those three things.
What are the upsides?
First, consider yourself. It's easy to take this idea too far, but the person you have the greatest responsibility for - except perhaps your children or in some niche situations your spouse - is yourself. You have direct access to your emotional state. If you're not flourishing, it is first and foremost your responsibility to address that. And addressing it is worth something, even in the EA framework.
Second, consider bias. I think there's a common story which goes something like:
Well I need to make sure I make a good impression on people at orgs X and Y, since I might want to work for them. But they have close relationships with Z and A. So I need to be rubbing elbows with those people, too. To do that it'll help for me to live in the most expensive city in the world, which makes it really important I don't irritate C and D, because I'll need substantial funding just to not burn through savings.
I didn't properly appreciate this for a long time, but I keep seeing people be nervous to like, seriously criticize EA, because their (aspirational) livelihoods depend on it.
This is very bad for epistemology. It may be the single worst thing for epistemology, actually. People who feel socially precarious believe the dumbest stuff for basically no reason, indefinitely. The solution is usually to make tracks. Give the cool kids a few chances to let you in. If they don't, or they don't treat you how you'd like, vote with your feet. And later, if you do feel like contributing to discourse, you'll do it with a much clearer mind.
Third, consider value of knowledge. It's not actually very hard to get back into EA after a while. Don't burn any bridges or be a jerk to anyone, but that should go without saying. If you simply log off for a while, and you ever get the itch again, it's easy to return. And indeed, I think that's the best way to return. Consider two people:
Person One: lives in the Bay Area and has had a few internships for EA-ecosystem organizations. Can't seem to break into a more stable and serious employment situation. Has good credentials but has been loosely in EA since college and has never applied for a job outside the informal EA network. Is struggling with significant mental health challenges. Has bursts of manic energy and gets excited about projects, but loses initiative when nobody really supports them.
Person Two: lives in Omaha, and is married to a spouse without much interest in EA, and has two children. Did EA stuff for a while, and gives 5% each to CWF and SCI. Is kind of tired of their job and considering other opportunities in general, so takes a peek at the EA Forum and 80k Hours board to see if anything remote might be up their alley.
Person Two is plausibly a better candidate, and certainly not obviously a worse one. And more importantly, they can browse with high confidence that this is what they actually want, and not simply them being afraid to leave the only socioeconomic niche they've ever known as an adult. If you cultivate the ability to easily support yourself totally separately from EA, the desire to engage with EA is no longer tangled up with your basic survival drive. Clearer, crisper motivations are valuable in their own right, and instrumentally good too.
What are the drawbacks?
You might have less impact.
Or you might not! The people who seem to have had the greatest impact in history - Borlaug and Petrov, for instance - just sort of were in the right place at the right time with the right interests.
The paradigm of evaluating impact in EA is community specific. Does it track reality? Depends on a bunch of underlying assumptions. On what level do you really believe those assumptions in an unbiased, clear-thinking way? The best way to know that is to get away from the social community based around them, and see if they still ring true. Maybe some will, and some won't.
But social communities have a way of reinforcing themselves. It's sensible that peripheral EA activities intuitively make sense to random people outside the bubble - nobody is terribly confused if I mention that I donate some money to provisioning malaria nets. But it's interesting that inner core EA activities like reducing AI risk or community building at top universities just seem... weird and onanistic from the outside. Does this mean they're invalid? Not at all! The general public is wrong about loads of stuff all the time. But it means maybe there's value in putting yourself outside and seeing what you, personally, think from that vantage point.
How would you do it?
If you keep striking out for EA jobs, just tap into your other social networks - possibly through family if you don't have many non-EA friends - and play the ordinary applying-for-jobs game. Referrals are good. Hold one for a few years, do well, and you may be surprised what that does for your mental health and perspective.
If you're more concerned with diversifying socially, go to whatever events will have you, including maybe public ones. Maybe bring something - one thing I did when I wanted more friends in my early 20s was just going to public house shows and always bringing a 6 pack of ciders to share with whoever I might meet. Most of the time I totally struck out. But eventually I met someone with a group of friends and piggybacked into it. Ten years later that group still contains about half my local friends.
But my advice doesn't really matter here. These are problems about which there's tons of advice out there, both publicly and from whoever you most trust. And if everyone you most trust is in EA, that's another sign it'd be good to diversify!
Listen, with reservations, to your gut
I think "listen to your gut" is overdone, especially if you're anxious or obsessive.
That being said, you probably have some instinct of whether involvement with organized EA is making you unhappy. And you can calculate whether it's doing much of anything for you apart from influencing your ambient emotional state. If the answers to those questions are "yes" and "no", it's probably time to hit the bricks for a while.
If they aren't, great! It's a good community for many people. But things that would be red flags in any community are still red flags here. Like if you notice yourself thinking:
- I'm not good enough to contribute
- I'll never be invited to do really important stuff
- Whatever I can do here, it's better than anything that isn't here by definition
- Being part of this movement is who I am
Those are bad signs. And they're bad signs even if you have counterarguments ready. The counterarguments may well prevail! But their presence is not in itself an invalidation of the "these feelings mean it's time to get distance" heuristic.
And again, you can come back! Taking breaks is underrated at almost every level. So if you think it might be healthiest for you to leave, do yourself a favor and actually think it through.
Thanks for writing this!
A few thoughts:
You touch on this briefly at the end, but I think what is missing from these “consider leaving EA posts” is what one might do before getting to the point where leaving is the best option. What might some early warning signs be, and what are some less intense measures someone might wish to consider prior to leaving entirely?
I wonder what we can do as a community to make these sorts of considerations easier on people. How might we be able to build structures/processes that can help people before it gets bad?
I think you might be underestimating the difficulty of coming back to the community after leaving. A few things that might be difficult about returning are (a) feeling like you’re out of the epistemic loop, (b) feeling weird about suddenly reappearing when people might notice you’ve been gone, and (c) needing to make life decisions while you’re gone (finding new jobs, moving, etc).
I suppose all this is quite n=1 though! It's just worked well for me. But mileage may vary.
Thanks for considering this. It does seem possible to me that more people should change how they interact with EA.
I feel a little surprised by you describing this as "leaving" though – I think basically everyone I know in EA has "left", by your definition. Do you have a different experience?
E.g. I personally meet all three of your criteria: my friends are mostly non-EA's, even though I work at an EA organization I don't rely on it for income, and my sense of self worth is not wholly tied to impact. But I would feel pretty surprised if someone described me as having left EA?
Ha, this is interesting. There might be something around "growing up in EA" where you get to a more grounded place after going through a pretty normal "explore with tons of energy" phase.
Though relative to you I seem much more invested in EA and leaning on it to provide a lot for me. So far, so good!
One reservation I have about people leaving the EA community is that they might be exactly the kind of people that EA needs. The fundamental project of EA is using reason, science, philosophy, and other epistemological tools to discover how we can use the resources we have to do the most good, and then acting on the information we develop.
The EA community operationalizes this project, and adopts various subordinate values. These subordinate values often strongly affect the feel and the environment of EA. But it is critical that EA has people who challenge the subordinate values, and who are potentially not aligned at that lower level, so they can contribute to the discourse on the fundamental EA project.
I realize this comment is a bit nonresponsive, because this post pertained more to one who is not getting needs satisfied by EA, not one who disagrees with some aspects of the community.
People who are not perfectly satisfied with EA are more likely to have some disagreements with what they might perceive as EA consensus. Therefore, recommending that they leave directly decreases the diversity of ideas in EA and makes it more homogeneous. This seems likely to lead to a worse version of EA.
I don't think that the first premise is necessarily correct.
Agreed. People should not base their entire social network, financial security, and self-worth on a single community.
When you described "stop expecting EA to provide for any of your core needs" I instantly thought "Hmmm. Are there really people who do that?" I've read about this type of thing online, but is this a caricature or are there really some people who have their whole life wrapped up in EA? (in which people have only friends in EA, work only in EA, and base their emotional self-worth only on EA)
I've purposely built my engagement with EA in line with the principles you wrote:
In this way I think I'm fairly permanently 'out' of EA. But I think I get the best of both worlds like this. I have a foot in enough to make impact and to be generally considered an EA, but if I was unceremoniously booted out of EA tomorrow it wouldn't be catastrophic. Or even significantly affect my life that much.
11/10 would recommend
Seconded!
I really liked the tone of this relative to posts on similar topics, felt more grounded and less panicked. (Maybe the others were right to be panicked, but this one resonated with me better).
I'm not sure that the 'bad signs' are always signs you should leave. For example the first one:
"I'm not good enough to contribute"
I personally have always thought I'm not good enough at whatever I'm trying to accomplish, and I doubt that'll ever stop completely. Sometimes I'm proven wrong in that belief, sometimes not. So even though I catch myself thinking that I don't think this is a sign I should leave the movement :)
These are just random and provocative thoughts.
Maybe some of the people you meet consider themselves to have limited agency, and so express aggression and acknowledgement of the state of affairs? Or, they agree with the system that they live in, even if it allows some people to profit (if the sentence continues 'but what can you do, you need to make people [like themselves] pay high rent since then if you are [their job] you can benefit from [what they do]'). Or, they acknowledge the sentiment that one could be interested in leaving their situation but express inability or limited opportunities to do so ('but it's the same everywhere'). Or, they perceive a 'threat' of being deprived of their privilege if they focus on improving global issues ('but it's actually ok'/'but I also have issues that I need help with'/'but improving the situation would require a change in the way I think, which is challenging').
Could you quantify or explain lots, persistently, and unhappy?
Are there any risks associated with leaving a system with ambitions of doing the most good which is suboptimal?
I am not an expert, but is it that some therapies recommend re-interpreting one's situation, communicating, and, if there are no other considerations that could make it better to stay, leaving?
One interpretation is that once a person is aware of global issues, others' commitment to resolve them, their opportunities to participate (grants, jobs), and their marginal value in this effort (funding overhang), then they cannot leave the memeplex of doing the most good, even if they (temporarily) leave the community, because they are compelled by need.
Another interpretation is that people who learn about EA will always think about impact to some extent, even if they leave EA. It is because the ideas make sense and they would feel somewhat bad not considering them since they learned it is 'good' to do so.
Third way to think about one's perspective on leaving EA is that people learn about EA, participate for a while, and while acknowledging that they could do a lot of good, consider that they would be happier if they left and fully focused on other things. They have the 'approval' and can reason that systemic change that would make it so that all people have better impact is needed and that there are enough brilliant people who actually choose to stay for whatever reason.
People in all three categories can be happy or unhappy to different extents. One can suggest that keeping oneself physically healthy, having good relationships, being secure, treating and preventing any mental health issues, doing what one likes, and existing in an environment that one wants to be a part of can support one's happiness. A person who 'cannot leave' can thus be very happy while a person who 'leaves and forgets' very unhappy and vice versa.
Telling people to leave instead of supporting them with their issues can reduce the community's aspect of happiness based on their valuation of the system that they live in ;)
It can be argued that people who would be better off leaving should leave even if they were 'great unique assets', but that people should be supported with whatever they need, to a reasonable extent (e. g. taking advantage of the EA mindfulness program, Effective Self-Help, or working with some of the therapists and providers recommended by EA Mental Health Navigator or with the AI Safety Support Health Coach). Possibly, few people would argue that the relatively little resources spent on mental health support in the community are excessive. It could be great even if people use these 'free' resources and then leave the community, if they are thus happier. Of course, one can also take advantage of non-EA therapists and resources.
I agree that non-EA friends can be fun. Even a person who is (momentarily) highly influenced by thinking about impact, who would probably feel bad about hanging out with, for example, an oil investor inconsiderate of the environment, can be fine with most people, and even learn that issues are not so dark or bleak as could be prominent in some EA narratives (e. g. learning from an engineer that some software is quite ok).
Why should one not apply for EA jobs? Is it to save themselves time which they might not have (e. g. would be sacrificing on social or financial)? Otherwise, applying for any jobs which are well compensated can make sense for financial needs.
But there is no one understanding of self-worth in EA, just like there is no clear definition of EA. Should there be one understanding? For example, something which would take into account self-care as well as contribution by various means, with respect to one's potential? Elliot Olds could be interested in this as he posted something about an impact list.
Hopefully their number is decreasing? This piece to an extent sets an openness standard, the Future Fund has been gathering ideas, criticism, and ideas on how to solicit criticism since the beginning, OPP is also asking what they could be better off funding, and there is even a contest for criticism.
There is no coolness scale ..........
There was an article related to this. I hope EA content does not cause mania. Nothing on 'memetic content' was mentioned as a trigger.
It can be argued that one might not because what the EA community can offer for one's choice of 'high benevolence' is nothing, besides perhaps the community. It can be better if a very large majority of people want to be in the community or are somewhat in control of and supported in their in-and-out vacillations.
Hm, just a hypothesis: what could be causing any mania in EA is the highlighting of heroes who were at the right place at the right time with an interest in impact, and the narrating of opportunities in EA in a similar way. This issue can be mitigated by suggesting that these individuals were parts of institutions that made it so that they took the actions and decisions they did. Plus, that there are many people unacknowledged for their contributions, due, generally, to social structures which motivate people to work for them as they seek status. It is really cooperation that makes representatives succeed.
For example, the person who told Petrov about the likely extent of the strike, the other people he encountered during his training who improved his decisionmaking abilities, and possibly his superiors (and their superiors) who made it so that Petrov trusted that he was supported in independent decisionmaking, can all be credited for this decision.
One can suggest that this can be addressed. Narratives which make people seem unfriendly or weird can be reconsidered. For example, AI risk could be narrated as ameliorating human biases with respect to the law and improving human institutions based on biases detected by AI, but it would be a mistake to exclude the variety of other issues that AI safety addresses, such as the potential of and risks associated with an intergalactic expansion.
An argument against discouraging 'the general public' from paying attention to some higher-risk issues, such as AI safety or biosecurity, is that this could increase risk. When structures that support positive development with increased public interest are not in place, it is better that EA core activities are just not discussed much in public, due to emotions about them. This could be rationalized and perhaps thus improve any negative emotions.
Community building at top universities can be another example of a strategic consideration. It may be better to first include top problem solvers, and only afterwards people less skilled at it. A group started by non-top problem solvers could address issues relatively worse.
What do you mean by 'organized' EA? Are some resources/events/people more 'organized' than others? Could you share some examples?
One can agree. Is it that the first two could be addressed by therapy focused on emotions while the latter two by reason?