A big thank you to Frances Lorenz, Akash Wasil, Dewi Erwan, Jake McKinnon, Wim Howson Creutzberg, and George Rosenfeld for comments and feedback that monumentally improved the quality of this post.
TL;DR: We need to be more intentional about preserving good cultural norms in community building. We should be prudent about how our recruitment practices and approaches to movement growth will affect both short and long term community health, so that we can positively shape EA community culture in the years and decades to come.
Introduction
It’s old news that EA has recently received a big, big influx of money. A fraction of this funding goes towards supporting community building events and programming (e.g. dinners, discussion groups, EAGs, etc.). This is a positive development, but with great power comes great responsibility. This post attempts to map out some of our personal concerns about community epistemic health, along with issues we have seen cropping up in EA spaces recently. We will propose a framework for how to think about long-term community health, go over why we think it is being compromised, and finally provide some personal recommendations for how community building practices can be improved moving forward.
This post is not comprehensive and we plan to release further posts to capture specific ideas that we think deserve elaboration. As always, any feedback is much appreciated! Ultimately, our hope is that more dialogue about the importance of long term community health will shift existing community building practices or at least get community builders thinking more about long term health, such that EA can continue to grow sustainably.
What is “community health”?
EA has a number of important core values. Here are a few examples:
- Epistemic humility
- Updating in response to evidence
- Open-mindedness
- Acknowledging inherent uncertainty
- Cause impartiality
- Truth-seeking
- Maximizing positive impact, broadly construed
These values are good; they make EA unique and capable of creating lasting change. As such, we should try to preserve them to the best of our ability. Of course, we should be wary of Goodharting EA community health. Our ultimate goal is to ensure that the movement is steering towards a good future, but “future goodness” is too vague to optimize for directly, so, lacking a concrete mechanism, here’s a fairly robust proxy:
Our framework for assessing community health/quality
- Whether the EA community is preserving good norms and epistemics (values)
- Internal perceptions of EA by its members (internal optics)
- External perceptions of EA by non-EAs (external optics)
This is a good proxy because even if we make amazing progress on solving direct problems, the EA movement will still need good epistemics to solve subproblems, to continue improving itself, and to perform cause-area/methods research. This requires EA to be epistemically rigorous and well-respected (i.e. to have good community health).
On shifting community culture
A community’s culture is inevitably shaped by the dispositions of its members. There are currently two main mechanisms of community building in EA, each of which attracts a specific demographic:
- Talent search (high funding, mostly longtermist, large-scale selection of "elite, smart, capable, agentic individuals")
- Community groups (university clubs that try to expose as many people to EA ideas as possible, lower retention/absorption)
Our primary concern about certain dominant community building practices is how they will shift community culture in the long term. Currently, it seems like there is little to no intentionality about how community builders are shaping the future of the EA movement and what EA culture will/ought to look like in a few decades (this view was formed based on conversations with several prominent community members).
The failure mode of general community building is that it creates a reputation of frivolity with money, which leads to poor optics and adverse selection effects. The failure mode of talent search is that it fosters a culture of elitism and exclusion by visibly spending disproportionate time and energy on the "best of the best," which may cause ripples of resentment in the community.
This is not to say that EA shouldn’t change or adapt its culture as needed. EA culture will inevitably change, but current cultural norms are important and useful enough that we should think much more carefully about the consequences of these changes and shape them with intentionality.
Because adhering to good cultural norms/existing EA axioms is a good heuristic for generating impact, we ought to consider how recruitment practices will affect the demographics, and thereby the values, of the EA movement over the long term (10-30 years). If current trends in spending and recruiting continue, it seems likely that much of what EAs currently like about the community will gradually become less common. In addition to the values listed above fading, the community will become less approachable as more value-misaligned people enter the movement. Furthermore, properties we have tried hard to avoid until now, like groupthink and moral arrogance, will likely become more common. This, in addition to making the community less fun generally, will also make it less effective at solving direct problems.
To best mitigate the risk of this happening, it seems like more thought should be put into what we want EA to look like as a movement and what we want to keep/change about its current course. The simple fact that EA is a relatively small movement that now controls a bunch of resources means that any decisions regarding funding and talent distribution are shaping what EA and EA culture are, whether done purposefully or not. The community building decisions we are making now are forming the backbone of what EA culture will be 5, 10, 15 years down the line. This is a huge responsibility that we should keep at the front of our minds when we’re deciding how to target outreach and what to spend money on.
Given the problems that are arising now, continuing to let the movement “grow itself” seems like a bad idea. It seems important at this point in time for EA leaders and community builders to be more transparent with how they are making decisions and more assertive with their visions of what they would like EA to grow into.
Of course, the uncontroversial ideal is a community that perfectly balances ambition and rigor. We want a community that embodies good EA values but also makes amazing progress on direct problems. We want to consider all of the factors and make the right decisions. But what does this mean on the object-level? What do the decision-makers want to see more of and less of? What are community builders concretely building towards?
Tradeoffs must be made, and different people will have clashing ideas about what should be prioritized. While we know these are hard questions and have no doubt these questions are being debated privately all the way up the beanstalk, it seems like the current lack of centralized and outspoken public direction is leaving the movement vulnerable to exploitation.
Ways in which the EA movement may be getting worse
1. Epistemic erosion
We define epistemic erosion as the collective worsening of EA’s core epistemics. Epistemic erosion occurs when new selection pressures start pushing towards other values; for instance, generous community building funding selects more strongly for people who value financial status and personal satisfaction. Financial incentives can also warp people’s cause prioritization; for instance, someone might be much more likely to subscribe to longtermism if they can receive substantial funding for their projects.
It’s likely that we care much more about the epistemics of EAs in positions of power than about those of the movement as a whole. However, the grantmakers in charge of funding and the most prominent researchers receiving funding are hardly immune to motivated reasoning and other biases. It’s also the case that many of the first interactions new members have with EA are with general community members, not with the people at Open Philanthropy or Redwood Research. Visibly poor epistemics at this ground level of community building will almost certainly turn away some incredibly promising people.
Some consequences of worse epistemics; or, rationality 101
- Dissolution of trust
- This is critical, but doesn’t seem to be addressed in the recent posts related to this subject. Oliver Habryka and Mark Townsend wrote comments that we thought were quite good. TL;DR: people will be more motivated to be deceptive, which destroys the culture of trust.
- Having lower rigor for arguments that are personally beneficial
- Entrenchedness in existing cause areas
Several highly upvoted posts on the Forum track similar sentiments and trends, suggesting that EAs are beginning to notice a deterioration of epistemics. Furthermore:
- Anecdata 1: a group that went to EAG London allegedly took their students to expensive Michelin restaurants and went clubbing instead of making the most of EAG
- Anecdata 2: a person lied about their involvement in a prominent university group to try to get funding from FTX (this was verified by the organizers of said university group)
At the very least, it seems like it is becoming easier for people to access EA resources for self-serving reasons.
2. External optics and internal optics
We think maintaining good optics is important, even in situations where there may be short-term impact tradeoffs. We also want to broadly map out how the optics of EA are shifting as a result of the recent funding influx.
Why optics are important
- Bad optics turn away talented people who could potentially be very impactful
- Bad optics can select for opportunists and grifters:
- There’s a unilateralist’s curse situation here, where visibly poor projects or careless use of funding can destroy EA’s reputation. “It takes 20 years to build a reputation and 5 minutes to ruin it.”
- The EA community has a very high level of trust and it’s one of the things that makes it so special. If more opportunists and grifters join, then this will corrode our ability to work together.
- Bad optics demotivate existing EAs (damaging internal optics)
- If the EA movement receives bad PR, then several EAs might start to question the resilience of the movement, and instead hedge their bets by pursuing a career that seems more stable
- Worse optics and opinions can also spread to the friend groups that people have outside of EA, which can increase alienation/burnout (this has happened to people we know)
- Bad optics increase the odds of external resistance
- Generally, influencing large-scale public change will be hard if major publications put out hit pieces on EA, which might also lead to boycotts, policy resistance, etc.
How the funding situation can negatively impact optics
- Facilitates extravagant spending (e.g. first-class flights, expensive hotels/offices, fancy food) that appears contrary to EA values
- Unilateralist’s curse exists for bad projects
- Ambitious projects fail or do harm after lots of money is spent
- People flaunt/showboat their funding
- Anecdata: at a focus university, a group was showing off that they went to lots of conferences for free
- Homogenous grant recipients make EA seem like a self-serving group
- Majority white, male, AI, Silicon Valley, etc.
In the long-term, bad optics weaken the impact of the EA movement as a whole. In addition to making it harder to get a foothold for outreach in mainstream circles, they can incentivize people who don’t hold EA values to join the movement while disincentivizing people who do.
A case study: why community health matters for AI alignment
There are several tangible and visceral reasons why we should be intentional about the future and health of the EA movement:
- If the movement has bad optics, many exceptionally promising potential AI researchers may be dissuaded from joining EA before they’ve even had the chance to try it out
- Poor incentives encourage very talented individuals with bad epistemics to join the movement for monetary gain or status; such people likely won’t be as diligent with their work and may even hinder progress when working on a team
- Laser-focusing funding and talent on one cause area makes the movement more vulnerable to groupthink and cause bias
- Selecting for a specific type of person leads to homogeneity of thought, meaning important perspectives are missed (H/T George Rosenfeld)
On the other hand, we should still be wary of invisible mistakes; for instance, the costs associated with not making effective time-money tradeoffs can be quite high, such as losing out on outreach that could have convinced someone to become a highly engaged EA.
Still, the consequences of neglecting epistemic health and optics, in several cases, can outweigh and even reverse the progress provided by speedy movement growth and generous funding, especially when alternative options which account for community health without compromising significantly on impact are considered.
Conclusion and recommendations
We want to be impactful while maintaining good epistemics; to do so, we need to be more thoughtful about how our community building practices construct/erode norms and optics. In pursuit of this, here are some of our personal recommendations. There are tradeoffs to all of these, and we expect to encounter constructive disagreement, but we hope that giving our takes on what we would like to see more of will encourage others (especially those who disagree) to share their thoughts and contribute to a more open conversation about how we can go about shaping the future of the movement as a whole.
1. Foster a culture of long term thinking.
If you are a community builder, try to foster a culture of thinking about the EA movement on longer timelines (10-20 years), and consider how community-building decisions are contributing to your vision of the movement’s future (maybe through the use of BOTECs; a toy sketch of what this could look like follows the counterpoint below).
Counterpoint: We should be conscious of the tradeoffs involved with spending our energy and time in this way. In addition, perhaps it’s just really hard to come up with concrete thoughts and plans for the future, especially reliable ones. How do we decide which decisions will impact the future more positively, given the qualitative nature of this recommendation?
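To make the BOTEC suggestion above a bit more concrete, here is a minimal sketch of how a community builder might compare two outreach strategies while explicitly pricing in a long-term community-health effect. Everything in it is an assumption chosen purely for illustration: the function, the placeholder numbers, the 15-year horizon, and the discount rate are ours, not established practice.

```python
# Toy BOTEC (all numbers are made-up placeholders): compare two community
# building strategies over a multi-year horizon, with an explicit term for
# the strategy's effect on long-term community health.

def strategy_value(new_engaged_eas_per_year, value_per_engaged_ea,
                   annual_health_effect, years=15, discount_rate=0.05):
    """Discounted sum of yearly value: recruitment value plus a (possibly
    negative) community-health adjustment, in arbitrary 'impact points'."""
    total = 0.0
    for t in range(years):
        yearly = (new_engaged_eas_per_year * value_per_engaged_ea
                  + annual_health_effect)
        total += yearly / ((1 + discount_rate) ** t)
    return total

# Strategy A: fast, well-funded growth with a small ongoing optics/epistemics cost.
# Strategy B: slower growth with no health cost.
fast_growth = strategy_value(12, 10, annual_health_effect=-50)
slow_growth = strategy_value(8, 10, annual_health_effect=0)

print(f"Fast growth with health cost: {fast_growth:.0f}")
print(f"Slower growth, no health cost: {slow_growth:.0f}")
```

The point is not the specific numbers; it is that writing them down makes the weight placed on community health explicit and debatable, which is exactly the part of the tradeoff this post argues usually goes unexamined.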
2. Be open and transparent with your work.
If you are a community builder (especially one with a lot of social status), be loudly transparent with what you are building your corner of the movement into and what tradeoffs you are/aren’t willing to make. Share your goals and process for achieving them.
This will provide two benefits:
- If there is agreement, more people will hop on board and thus expedite the process of EA growing positively
- If there is disagreement, it will hopefully foster productive dialogue between people who have different styles of community building
Counterpoint: There is a high degree of uncertainty among community builders about what EA “should” be, and perpetuating this culture may increase partisanship/divisiveness among EAs who have different approaches and goals.
3. Have higher bars for general community building funding.
This will disincentivize self-interested non-EAs from coming into the movement for the money. We think some costly signals for EAs are useful and have good selection effects, such as paying out of pocket to go to a conference about helping the world.
- Anecdata: For EAG London 2022, there was an application that most people had to fill out to get accepted into the conference. However, close to the conference, the organizers sent out a link that allowed people to be automatically accepted, even if they had already been rejected for London 2022. This likely lowered the quality of conversations at the conference, worsened optics overall, and set a poor precedent.
Counterpoint: There are upsides to having low-friction applications for conferences, and it certainly incentivizes a lot of great people to come. Also, making funding applications longer selects against busier people, which seems counterproductive, since busyness is somewhat correlated with being more impressive or working on cool projects. Finally, jumping through more hoops could make it harder for value-aligned people to join, and it doesn’t fully dissuade grifters.
4. Diversify community-building funding across cause areas.
Funding concentrated within single cause areas incentivizes siloing, and when siloed, people working in different cause areas become more vulnerable to groupthink, echo chambers, and mutual mistrust. Diversifying might look like organizing events at your local university with both the AI club and the animal rights advocates.
Counterpoint: There are a lot of benefits when people in the same cause area who have similar amounts of esoteric technical expertise talk to each other about the same problem, and there are informal social structures that also allow for diffusion of ideas between groups.
5. Weigh the costs of elitism more heavily.
Talent search recruitment specifically should be careful about the consequences of narrowing selection pools too early and how this may foster a culture that is more elitist and prone to groupthink. Elitism can turn many promising people away, and can also create a culture of resentment towards the "chosen" few.
Counterpoint: While elitism leaves a bad taste in our mouths, there are undeniable benefits that may outweigh the costs. Prestige is a strong motivating force for talent, and arguably a defining feature of schools like Harvard and Stanford. Thomas Kwa also has a good comment related to this.
6. Weigh the benefits of good community health more heavily.
When evaluating whether funding is justified by cost-benefit (i.e. in instances where optics suffer but the spending may be worthwhile impact-wise), we should value community health highly: many programs that pass a naïve cost-benefit analysis will fail once you account for the long-term impacts on community health, culture, and optics. We should also consider the opportunity cost of better-run programs in our analyses.
Counterpoint: It is very difficult to nail down a consistent, quantifiable metric for the value of “good community health” especially when cost-benefit analyses are run by community builders with diverging priorities.
7. Consider alternatives when downside-uncertainty is high.
When the cost-benefit is net positive but comes with a lot of uncertainty (as in the case of the time-money vs. optics-epistemics tradeoff), we should consider alternatives that significantly reduce downsides without significantly hurting upsides (a toy numerical sketch follows the counterpoint below).
- For example, if a university group is planning on throwing socials or dinners throughout the year, instead of going to a club or restaurant, consider hosting the party at someone’s house and instead compensating a (non-EA) friend to help cook, or dropping by a coffee shop and then hanging out at a park (or maybe even just hanging out at the park?).
Counterpoint: Alternatives take time to think of, and sometimes they aren’t easily accessible or scalable, since each group has its own specific needs. In addition, all decisions will have some level of uncertainty, and it’s not clear how much one should sacrifice for the sake of smaller error bars.
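As a purely illustrative sketch of the “reduce downsides without significantly hurting upsides” point, the quick simulation below (with invented numbers, loosely mirroring the restaurant vs. house-party example above) shows how an option with a slightly lower upside but a much smaller chance of an optics blow-up can come out ahead in expectation:

```python
# Illustrative simulation (invented numbers): compare two event formats when
# one carries a small probability of a large optics/epistemics downside.
import random

random.seed(0)

def average_outcome(upside_mean, upside_sd, downside_prob, downside_cost, n=100_000):
    """Mean outcome: a normally distributed upside, minus a large cost that
    occurs with some small probability (e.g. a 'frivolous spending' story)."""
    total = 0.0
    for _ in range(n):
        outcome = random.gauss(upside_mean, upside_sd)
        if random.random() < downside_prob:
            outcome -= downside_cost
        total += outcome
    return total / n

fancy_restaurant = average_outcome(upside_mean=10, upside_sd=3,
                                    downside_prob=0.05, downside_cost=100)
house_party = average_outcome(upside_mean=9, upside_sd=3,
                              downside_prob=0.01, downside_cost=100)

print(f"Fancy restaurant (higher upside, fatter downside tail): {fancy_restaurant:.1f}")
print(f"House party (slightly lower upside, thinner tail):      {house_party:.1f}")
```

Again, the numbers are placeholders; the exercise is just a way of checking whether the downside term, once written down, actually dominates the comparison.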
Thanks for writing this post! I like a lot of the recommendations you made as well as the specific examples you point to when talking about concerns. I’m really glad we’re having these conversations and I think this post contributes to it.
Another source of epistemic erosion happens whenever a community gets larger. When you’re just a few people, it’s easier to change your mind. You just tell your friends, hey I think I was wrong.
When you have hundreds of people that believe your past analysis, it gets harder to change your mind. When peoples’ jobs depend on you, it gets even harder. What would happen if someone working in a big EA cause area discovered that they no longer thought that cause area was effective? Would it be easy for them to go public with their doubts?
So I wonder how hard it is to retain the core value of being willing to change your mind. What is an important issue that the “EA consensus” has changed its mind on in the past year?
Thanks, great points (and counterpoints)!
I like this suggestion--what do you imagine this transparency looks like? Do you think, e.g., EA groups should have pages outlining their community-building philosophies on their websites? Should university groups write public Forum posts about their plans and reasoning before every semester/quarter or academic year? Would you advocate for more community-building roundtables at EAGs? (These are just a few possible example modalities of transparency that came to my head; very interested in hearing more.)
+1 to transparency!
I would love to see more community builders share their theories of change, even if they are just half-page Google Docs with a few bullets and links to other articles (noting where their opinions differ), and periodically update them (say, every 6 months or so) with major changes and examples of where they were wrong (this last part is by far the most important to me).
+1
I think GiveWell and OP's early commitment to transparency were admirable, if unusual and time-consuming. Not all groups will go as in-depth, of course, but I think it's usually good when EA leaders and emerging leaders are brave enough to practice their reasoning skills in the real world of their projects, and to show their thinking as it develops.
Forgive me for only skimming this and making a rather off-topic comment, but:
From an outside perspective, how sure are we of this actually? E.g. have organizations and people that generated large positive impact so far adhered to EA-style culture or axioms?
The writing on epistemic erosion reminds me of a weird feeling I left with after an EAGx conference. I’ve benefitted tremendously from having conversations with experienced AI safety people at previous EAGs. I think of EAGx as a way to connect with people, but with more of an emphasis on giving my own advice and trying to pay forward the experience I’ve accumulated to people interested in AI Safety who are even more junior than I am. I had a lot of great meetings, but a surprising number of them left a bitter taste in my mouth.
Possibly I came in with improper expectations, but I expected a lot of discussion around project and research ideas, and general ways to get involved with AI Safety-specific research if it was difficult to do at their current uni (through programs, funding opportunities, ...). Instead, in many of my meetings, the questions superficially resembled the kind someone motivated to reduce x-risks from AI would ask, but felt distinctly different: there were undertones of being motivated by prestige/status-seeking for its own sake rather than as something instrumentally useful on the path to reducing x-risks from AI. Thoughts on how to get hired at OpenAI or DeepMind, or on master’s programs in subject X at a prestigious university Y, where X wasn’t even related to my own background, but I just happened to be at university Y... It felt kinda bad because a lot of questions were things that could’ve been googled, or seemed very strongly driven by an eagerness to pursue high-prestige opportunities rather than by anything about refining ideas on how to contribute to AIS.
It’s totally possible I’m overreading the vibes of those conversations from that conference, but when I imagine what kinds of things I’d be curious about (and had been curious about a few years ago) when I wanted to be helpful to AI Safety but was unsure how, the kinds of questions and the direction I’d expect the conversations to be steered in were very different from what I experienced. Just another light piece of anecdata (admittedly highly speculative) on epistemic erosion.
edit*: I had lots of really great conversations where I was really happy I got to talk. The surprise was mostly about the percentage of conversations that gave me that ^ feeling.