
Community building has recently had a surge in energy and resources. It scales well, it’s high leverage, and you can get involved in it even if six months ago you’d never heard of Effective Altruism. Having seen how hard some people are rowing this boat, I’d like to see if I can’t steer it a bit.

The tl;dr is:

  • Some current approaches to community building especially in student groups are driving away great people
  • These approaches involve optimising for engaging a lot of new people in a way that undermines good epistemics, and trades off against other goals which are harder to measure but equally important (cf Goodhart’s Law)
  • I argue that this is a top priority because the people that these approaches drive away are in many cases the people EA needs the most.

I’ll start by describing why I and several friends of mine did not become EAs.

Then I’ll lay out my sense of what EA needs and what university community building is trying to achieve. I’ll also discuss things that I have encountered in community building that I’ve found troubling.

After that I’ll give my model for how these approaches to community building might be causing serious problems.

Finally I’ll explain why I think that this issue in particular needs to be prioritised, and what I think could be done about it.

Part 1 - Reasons I and others did not become EAs

1

I was talking to a friend a little while ago who went to an EA intro talk and is now doing one of 80,000 Hours' recommended career paths, with a top score for direct impact. She’s also one of the most charismatic people I know, and she cares deeply about doing good, with a healthy practical streak.

She’s not an EA, and she’s not going to be. She told me that she likes the concept and the framing, and that since the intro talk she’s often found that when faced with big ethical questions it’s useful to ask “what would an EA do”. But she’s not an EA. Off the back of the intro talk and the general reputation as she perceived it, she got the sense that EA was a bit totalising, like she couldn’t really half-join, so she didn’t. Still, she enjoys discussing the concept of it with me, and she’s curious to hear more about AI.

Certainly there are some professions, like AI safety, where one person going all in is strikingly better than a lot of people who are only partly engaged, but in her area I don’t think this applies. I’ll build on this later.

2

A friend of mine at a different university attended the EA intro fellowship and found it lacking. He tells me that in the first session, foundational arguments were laid out, and he was encouraged to offer criticism. So he did. According to him, the organisers were grateful for the criticism, but didn’t really give him any satisfying replies. They then proceeded to build on the claims about which he remained unconvinced, without ever returning to them or making an effort to find answers themselves.

He recently described something to me as ‘too EA’. When I pushed him to elaborate, what he meant was something like ‘has the appearance of inviting you to make your own choice but is not-so-subtly trying to push you in a specific direction’.

3

Another friend of mine is currently working on Bayesian statistical inference, but has an offer to work as a quantitative trader. He hopes to donate some of his income to charity. He does not want to donate to EA causes, or follow EA recommendations, and in fact he will pretty freely describe EA as a cult. He has not, as far as I know, attended any EA events. He has already made his mind up.

As far as I can tell, this is the folk wisdom among mathematicians in my university: I’ve heard the rough sentiment expressed several times, usually in response to people saying things like “so what do you guys make of EA?”

4

I have a friend who has just started a career in an EA cause area. She knows about EA because I have told her about it, and because I once gave her a copy of The Precipice. But there’s never really been a way for her to get engaged. Her area of interest is distinctly neartermist, and even though she lives in one of the most densely EA cities in the world, she’s never become aware of any EA events in her area.

Me

When I came to university I had already read a lot of the Sequences and I’d known about effective altruism for years, and even read some of the advice on 80,000 Hours. But upon investigating my local group I was immediately massively put off. The group advertised that I could easily book a time to go on a walk with a committee member who would talk to me about effective altruism and give me career advice, and to me this felt off. Every student society was trying to project warm inviting friendliness, but EA specifically seemed to be trying too hard, and it pattern-matched to things like student religious groups.

I asked around, and quickly stumbled upon some people who confidently told me that EA was an organisation that wanted to trick me into signing away my future income to them in exchange for being part of their gang. The fact that anyone would confidently claim this was enough to completely dissuade me from ever engaging.

Nonetheless, I retained a general interest in the area, and indeed my interest in rationality led me to get to know various older engaged EAs. They never tried to convince me to adopt their values, but they were pretty exemplary in their epistemics, and this made them very interesting to talk to. The groups I floated in were a mix of EAs and non-EAs, but eventually it rubbed off on me. And I’m pretty sure that if my first encounter with EA hadn’t been my university group, it would have rubbed off a lot sooner.

Part 2 - My concerns with current community building approaches

I have a tentative model for how EA community building could be improved, which I’ve arrived at from a synthesis of two things. The first is my received sense of where EA is currently facing difficulty; the second is things that I have personally found concerning. I’ll lay these out, then present my best guess for what is going wrong in the next section.

Where is EA facing difficulty?

As far as I can tell, the most basic account is that EA is talent-constrained: there aren’t enough good people ready to go out there and do things. This yields the most basic account of what EA should be doing: producing more Highly Engaged EAs (HEAs).

But the picture is slightly more complex than that, because in fact EA is constrained on only some kinds of talent. Indeed, openings for EA jobs tend to be massively oversubscribed. So what specific kinds of talented people does EA need more of? Well, the most obvious place to look is the most recent Leader Forum, which gives the following talent gaps (in order):

  • Government and policy experts
  • Management
  • The ability to really figure out what matters most and set the right priorities
  • Skills related to entrepreneurship / founding new organizations
  • One-on-one social skills and emotional intelligence
  • Machine learning / AI technical expertise

As you can see, there are in fact five categories which rank above AI technical expertise. So the question is, if many EA jobs are flooded with applicants, why are we still having trouble with these? What I will go on to claim is that current community building may be selecting against people with some of these talents.

What I have found disconcerting

The most concrete thing is community builders acting in ways that seem too overtly geared toward conversion. For instance, introducing people to EA by reading prepared scripts, and keeping track of students in CRMs. I find this very aversive, and I would guess that a lot of likely candidates for EA entrepreneurs, governmental officials, and creative types would feel similarly.

This point bears repeating because as far as I can tell a lot of community builders just don’t think this is weird. They do not have any intuitive sense that somebody might be less likely to listen to the message of a speech if they know that it’s being read from a script designed to maximise conversion. They are surprised that somebody interested in EA might be unhappy to discover that the committee members have been recording the details of their conversation in a CRM without asking.

But I can personally confirm that I and several other people find this really aversive. One of my friends said he would “run far” if, in almost any context, someone tried to persuade him to join a group by giving a verbatim speech from a script written by someone else. Even if the group seemed to have totally innocuous beliefs, he thought it would smack of deception and manipulation.

Another red flag is the general attitude of persuading rather than explaining. Instead of focusing on creating a space for truth-seeking - learning useful tools and asking important questions - it seems like many community builders see their main job as persuading people of certain important truths and coaxing them into entering certain careers. One admitted to me that, if there were a series of moves they could play to convert a new undergrad into an AI safety researcher or someone working on another job that seems important, they would ‘kind of want to’ play those moves. This is a very different approach from giving exceptional people the ‘EA toolkit’ and helping them along their journey to figuring out how to have the biggest impact they can.

EA may not in fact be a form of Pascal’s Mugging or fanaticism, but if you take certain presentations of longtermism and X-risk seriously, the demands are sufficiently large that it certainly pattern-matches pretty well to these.

And more generally, I find it odd to know that people who have only known about effective altruism for single-digit months are being put in charge of student groups. Even if they’re not being directly hired by CEA/OpenPhil, they’re still often being given significant resources and tasked with growing their groups. This is an obvious environment for misalignment to creep in, not through any malice but just through a desire to act quickly without a real grip on what to do.

Part 3 - My model of what is going wrong

My central and most important worry is that activities that come close to optimising for the number of new HEAs will disproportionately filter out many of the people it is most valuable to engage. I’ll reiterate the list of things we need more than technical AI expertise:

  • Government and policy experts
  • Management
  • The ability to really figure out what matters most and set the right priorities
  • Skills related to entrepreneurship / founding new organizations
  • One-on-one social skills and emotional intelligence

My impression is that there are some people who will, when presented with the arguments for Effective Altruism, pretty quickly accept them and adopt something approximating the EA mindset and worldview. I think that the people who excel in some of the areas I’ve listed above are significantly less likely to also be the kinds of people who get engaged quickly. I’ll lay my thoughts out in detail, but first let me give an easy example: “The ability to really figure out what matters most and set the right priorities”

People who care a lot about what matters most are likely to be the kinds of people who don’t just go along with arguments. They’ll be the kind that push back, pick holes, and resist attempts to be persuaded. I think it would be tempting to assume that the best of these people will already have intuited the importance of scope sensitivity and existential risk, and that they’ll therefore know to give EA a chance, but that’s not how it works. The community needs to contain people who won’t take the importance of existential risk seriously until they’ve had some time to think hard about it, and it will take more effort to get such people engaged.

If you don’t intentionally encourage the kinds of people who instinctively pick holes in arguments while you’re presenting EA to them for the first time, your student group is not going to produce people who are fantastic at coming up with thoughtful and interesting criticisms. I can point to specific people who I believe have useful criticisms of EA, but who have no interest in getting hired to write them up even if it can be funded, because they just don’t care that much about EA: when they tried to present criticism early on, they were ignored.

I’m going to address the following points in this order:

  1. Noticing the problem is itself hard, but too much focus on creating HEAs will sometimes cause you to miss the most impactful people
  2. A speculative model of things going wrong
  3. If these problems are real, they’re systemic
  4. Scaling makes them worse
  5. The faster your community is growing, the less experienced the majority of members will be.

After that, I will at least try to offer some recommendations.

Noticing the problem is itself hard, but too much focus on creating HEAs will sometimes cause you to miss the most impactful people

I think the basic problem is that firstly we might be failing to consider hard-to-measure factors, and secondly, we might be overweighting easy-to-measure factors. These are of course intimately connected.

On the first point: Zealous community building might sometimes cause big downsides that are really hard to measure. If somebody comes to an intro talk, leaves, and never comes back, you don’t usually find out why. Even if you ask them, you probably can’t put much weight on their answer: they don’t owe you anything and they might quite reasonably be more interested in giving you an answer that makes you leave them alone, even if it’s vague or incomplete. You should expect there to be whole categories of reasons (like ‘you guys seem way more zealous than I’m comfortable with’) which you’ll be notably less likely to hear about relative to how often people actually think them, especially if you’re not prioritising getting this kind of feedback.

Even worse, if something about EA switches them off before they even come to the intro talk, you won’t even realise. If something you say in your talk is so bad that it causes someone to go away and start telling all their most promising and altruistic friends that EA is a thinly-veiled cult, you will almost never find out - at least not without prioritising feedback from people who are no longer engaged.

Second, despite some pushback, current EA community building doctrine seems to focus heavily on producing ‘Highly Engaged EAs’ (HEAs). It is relatively easy to tell if someone is a HEA. The less engaged someone is, the harder it is to tell. Unfortunately, sometimes there will be people who take longer to become HEAs (or who never do), but who will have a higher impact than the median HEA even after accounting for how long it takes them and how engaged they end up.

I think the model of prioritising HEAs does broadly make sense for something like AI safety: one person actually working on AI safety is worth more than a hundred ML researchers who think AI safety sounds pretty important but not important enough to merit a career change. But elsewhere it’s less clear. Is one EA in government policy worth more than a hundred civil servants who, though not card-carrying EAs, have seriously considered the ideas and are in touch with engaged EAs who can call them up if need be? What about great managers and entrepreneurs?

I don’t actually know the answer here, but what I do know is that the first option - one HEA in a given field - is much easier to measure and point to as evidence of success.

To be really clear, I’m not advocating for an absolute shift in strategy away from HEAs to broader and shallower appeal. What I’m saying is that I don’t think it’s clear-cut, but a focus on measurably increasing the number of HEAs is likely to miss less legible opportunities for impact.

Why can’t people appreciate the deep and subtle mysteries of community building? Well, this is where Goodhart’s Law crops up: a measure that becomes a target to be optimised ceases to be a good measure.

The main way Goodhart’s Law kicks in is that the people setting strategy have a much more nuanced vision than the people executing it. The reason everyone’s pushing for community building, I believe, is that people right in the heart of EA thought about what a more effective and higher-impact EA would look like, and what they pictured was an EA which was much larger and contained many more highly-engaged people. Implicit in that picture were a bunch of other features - strong capacity for coordination, good epistemics, healthy memes, and so on. 

But when that gets distilled down to “community building” and relayed to people who have only been in university for a year or so, quite understandably they don’t spontaneously fill in all the extra details. What they get is “take your enthusiasm for EA, and make other people enthusiastic, and we’ll know you’re doing well if at the end of the year there are more HEAs”.

But often the best way to make more HEAs is not the best way to grow the community!
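The filtering concern can be phrased as a toy selection model. This is purely an illustrative sketch under invented assumptions (the trait names, the correlation, and every number are made up): if the easy-to-measure proxy (“converts quickly”) is even mildly anticorrelated with an illegible trait (“instinctively picks holes in arguments”), then recruiting the people who score best on the proxy selects against the trait, even though nobody ever decided to do that.

```python
import random

random.seed(0)

# Toy model (all numbers invented for illustration): each prospective member
# has an easy-to-measure proxy score and an illegible trait that is, per the
# hypothesis in this post, mildly anticorrelated with it.
def candidate():
    quick_to_convert = random.gauss(0, 1)                         # measurable proxy
    hole_picking = -0.6 * quick_to_convert + random.gauss(0, 1)   # illegible trait
    return quick_to_convert, hole_picking

pool = [candidate() for _ in range(10_000)]

# Optimise the proxy: recruit the 10% who look most likely to become HEAs.
recruits = sorted(pool, key=lambda c: c[0], reverse=True)[:1_000]

mean = lambda xs: sum(xs) / len(xs)
print(f"pool mean hole-picking:     {mean([h for _, h in pool]):+.2f}")      # roughly zero
print(f"recruits mean hole-picking: {mean([h for _, h in recruits]):+.2f}")  # well below zero
```

Under these made-up parameters, the recruited cohort scores around a full standard deviation below the population on the illegible trait, despite nothing in the selection rule mentioning it. The point is not the numbers, which I invented, but the mechanism.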

A speculative model of things going wrong

This is a bit more speculative but I’d like to sketch out a model for how this plays out in a bit more detail. I’d like to conjure up two hypothetical students, Alice and Bob, at their first EA intro fellowship session.

Alice

Alice has a lot of experience with strange ideas. She’s talked to communists, alt-righters, crypto bros, all kinds of people. She’s very used to people coming along with an entirely new perspective on what’s important, and when they set the parameters, she expects them to have arguments she can’t reply to, because she’s an undergrad and they’re cribbing their notes from professors, and sometimes literally reciting arguments off a script. Of course she actually quite likes sitting down and thinking through the problems - she enjoys the intellectual challenge. She knows the world is full of Pascal’s Muggers. She doesn’t know if EAs are muggers, but she knows they like getting people to promise to give away 10% of their income (which sounds to her like a church tithe), and she’s heard they sweep people away on weekend retreats. Still, she can appreciate that if they are right, what they’re doing is important, so she suspends her judgement.

At the opening session she disputes some of the assumptions, and the facilitators thank her for raising the concerns, but don’t really address them. They then plough on, building on those assumptions. She is unimpressed.

Bob

Bob came to university feeling a bit aimless. He’s not really sure what he wants to do with his life, or how he should even decide. Secretly he’d kind of like it if someone could just tell him what he was meant to do, because sometimes it feels like the world’s in a bad state and he doesn’t really get why or how to fix it. So when he hears the arguments in the opening session he’s blown away. He feels like he’s been handed a golden opportunity: if they’re right, he can be a good person, who does important work, with a close group of friends who all share his goals and values.

Are they right? He’s not sure. He’s never really considered these arguments but they seem very persuasive. And the organisers keep talking about epistemics, and top researchers. If they’re wrong, he’s not even sure how he’d tell, but if they’re right then it’s pretty important that he starts helping out right away. And he kind of wants them to be right. 

If these problems are real, they’re systemic

We should expect that new EAs doing community building will misunderstand high-level goals in systematic ways.

What this means is, it’s not just that some random cluster of promising people will get missed, it’s that certain kinds of promising people will get missed, consistently, and EA as a whole will shift its composition away from those kinds of people. To be clear, this isn't absolute: it’s not that everyone capable of criticism is filtered out, it’s that every group that prioritises producing HEAs will be slightly filtering against it and the effects will compound across the entire community.

If you’ve been told that CEA has hired you as a community builder because they think that counterfactually this will lead to ten more HEAs, and indeed, you think that it’s really very important to get more HEAs so that there are more people working on the biggest problems, and you meet an Alice and a Bob, well, maybe you’d rather talk to Bob about how to get into community building instead of talking to Alice about alternative foundations to the Rescue Principle.

And maybe this really is the right choice in individual cases. The problem is if it gradually accumulates. Eventually EA as a whole becomes more Bob than Alice, not just in terms of how many people with really fantastic epistemics there are, but also in terms of the epistemic rigour of the median HEA.

Personally the thing I’m most worried about is that this effect starts to wreck EA group epistemics and agency. I’ve seen little traces here and there which have given me concerns, although I don’t yet feel I can confidently claim that this is happening. But I think it’s really really important that we notice if it is, so that we can stop it. And this phenomenon is hard to notice.

Ironically, we should expect community building to tend towards homogeneity because community builders will beget other community builders who find their strategies compelling. And we should expect this to tend towards strategies that are easy to quickly adopt.

There has been some emphasis lately on getting community builders to develop their own ‘inside views’ on important topics like AI safety, partly so that they can then relay these positions with higher fidelity. I welcome this, but I don’t think it’s sufficient to solve the problem of selecting against traits we value. Understanding AI safety better doesn’t stop you from putting people off for any reason other than your understanding of AI safety.

A little while after I first drafted this post, there was a popular forum post entitled “What psychological traits predict interest in effective altruism?” I commend the impulse to research this area but I can very easily picture how it goes wrong, because while it may be true that there are certain characteristics which predict that people are more likely to become HEAs, it does not follow that a larger EA community made up of such people would automatically be better than this one.

Scaling makes them worse

It might now occur to you that not everyone joins EA through student groups. Some people come from LessWrong, some people see a TED Talk, some people just stumble across the forum. It’s true, and these will all be filtering in different kinds of people with different interests and values.

As the community changes, the way it grows will change. The thing to avoid is a feedback loop that sends you spiralling somewhere weird. Unfortunately this is exactly what you encourage when you try to scale things up. The easier something is to scale, well, the faster you’ll scale it. 

If you have a way of community building which produces ten HEAs in a year, two of whom will be able to follow in your footsteps, you will very quickly be responsible for the majority of EA growth. The closer a student organiser is to creating the maximum number of HEAs possible, the more likely they are to be Goodharting: trading away something else of value for more HEAs.

And bear in mind: the faster you’re growing, the newer the median member of the community will be. If EA doubled in size every year then half of EAs would only have been EAs for a year. And if any portion of EA managed to crack a way of doubling in size every year, it would very quickly make up the majority of the community.
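The doubling claim is just arithmetic, and a few lines (illustrative only, with made-up years) confirm it: under doubling, each new cohort is exactly as large as all previous cohorts combined, so half the community is always less than a year old.

```python
# Sanity check: if a community doubles every year, what fraction of members
# joined within the last year? (Starting size of 1 is arbitrary.)
size = 1.0
for year in range(1, 6):
    new_members = size        # doubling: the new cohort equals the whole existing community
    size += new_members
    print(f"year {year}: size {size:.0f}, joined this year: {new_members / size:.0%}")
```

Every year, exactly 50% of members are first-years, no matter how long the growth has been running.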

The faster your community is growing, the less experienced the majority of members will be.

Concretely, I worry that university groups risk instantiating this pattern. The turnover is quick, and the potential for rapid growth and scaling is a big selling point.

I imagine that older EA groups will have had both time to grow and time to consider downside risks. They’ll have more experienced members who can be more careful, and also less of a pressure to expand. On the other hand, newer groups will be saddled with both less experience and more desire and opportunity to scale up quickly.

It’s also generally hard, as someone with experience, to remember what it was like being inexperienced, and what was or wasn’t obvious to you. It’s easy to assume that people understand all the subtext and implications of your claims far more than they actually do. We need to actively resist this when dealing with newer, more inexperienced community builders.

Part 4 - Why to prioritise this problem, and what to do about it

You might think that, while this problem exists, it’s not worth focusing resources on it because it’s not as high-priority as problems like AI safety research. If better epistemics trades off against getting more alignment researchers, maybe you think it’s not worth doing. However, it’s not clear at all that this is the case.

First, AI Safety researcher impact is long-tailed, and I claim that the people on the long tail all have really unusually good epistemics, such that trading against good epistemics in favour of more AI safety researchers risks trading the best for the worst.

Second, most groups in history have been wrong about some significant things, including groups that really wanted to find the truth, like scientists in various fields. So, our strong outside view should be that, either at the level of cause prioritisation or within causes, we are wrong about some significant things. If it’s also sufficiently likely that some people could figure this out and put us on a better path, then it seems really bad that we might be putting off those very people.

Third, imagine a world in which EA student groups are indeed significantly selecting against traits we value. Ask yourself if, in this world, things might look roughly as they do now. I think they might. It’s easy to let motivated reasoning slip in when one wants to avoid acknowledging a tradeoff - for example, I’ve often told myself I have time to do everything I want to do, when in fact I don’t. This problem could be happening, and you might think this problem isn’t happening even if it is! Until we spend some resources getting information, our uncertainty about whether / how badly the problem is happening should push us towards prioritising the problem. (It might be really bad, and we don’t yet know if it is!) If we later found out that it wasn’t a big deal, we could deprioritise it again.

For all the same reasons that doing community building is important, it is really important to do it right.

So what do you do about all this?

It’s probably not enough just to acknowledge that it might be a problem if you don’t prioritise it. It’s also not enough (though it may be useful) to select for virtuous traits when choosing, for example, your intro fellows. Even if you do this, you will still miss out on anyone who is put off after the selection process, or who doesn’t even apply because they’ve heard that EA is a weird cult.

Honestly, it’s hard. I have to admit the limits of my own knowledge here: I don’t know what constraints community builders are acting under, or what the right balance between these factors is. Moreover, the issue I’m pointing to is, in the most general terms, that there are lots of hard-to-measure things which people might not be properly measuring. It’d be very easy, I suspect, to read this post and think “Look at all these other factors I hadn’t considered! Well, I’d better start considering them,” and move on, when in fact what you need to do is one meta-level up: start looking for illegible issues, and factors that nobody else has even considered yet.

So, now that you know that I don’t have all the answers, and that literally following my advice as written will only sort of help, here is my advice. 
 

  • Don’t actually think in terms of producing more HEAs. Yes, good community building will lead to more HEAs, but producing more HEAs is not enough to make what you’re doing good community building.
    • If you’re high-status within EA, think carefully about how you react to community builders who seem to create lots of HEAs. Don’t just praise them, but also coach them and monitor them. The more HEAs created, the more you should be suspicious of Goodharting (despite the best of intentions), so work together to avoid it.
  • Consider the downside risks from activities you’re running.
    • A useful framework is to seriously consider what types of people might be put off / selected against by an activity, and a good list of types to start with is the ones the Leaders Forum says EA needs.
    • Adopt the rule of thumb: ‘If many people would find it creepy if they knew we were doing x, don’t do x.’
      • Notice that many EA community builders seem to have different norms from other students in this regard. Especially if you didn’t think things like reading pre-scripted persuasive speeches or recording details from 1-on-1s in a CRM without asking seemed sinister, default to asking a handful of (non-EA) friends what they think before introducing a new initiative.
    • It might be helpful for there to be a community-building Red Team organisation, which could scrutinise both central strategies (e.g. from CEA or OpenPhil) and the activities of individual student groups.

 

  • Assume that people find you more authoritative, important, and hard-to-criticise than you think you are. It’s usually not enough to be open to criticism - you have to actually seek it out or visibly reward it in front of other potential critics.
  • Maybe try things like offering pizza, in exchange for feedback, to intro fellows who left.
  • You want good feedback from everyone, not just those who you thought would be highly impactful, since it’s easy for someone to be put off EA based on the message they hear from any other person, whether or not that person has potential for high impact.
      • No-one seems sure how much low-fidelity / misleading messages about EA are being spread. It would be great (and at least partly tractable) to research this.

 

  • Be open to changing your mind. I know this is kind of overplayed, but there’s a whole sequence on it, and it’s pretty good. Remember that the marginal value of another HEA is way lower than the marginal value of an actual legitimate criticism of EA nobody else has considered yet.
    • Seriously consider adding more events geared towards presentation of ideas than persuasion.
       
  • Don’t offer people things they want in exchange for self-identification and value adoption. Free pizza for showing up to a discussion group is fine, but if people feel like they’ll get respect and a friendship group only if they go around saying “AI safety seems like a big deal”, then that will be why some of them go around saying “AI safety seems like a big deal”.

 

  • Message me. I’ll try to reply to comments and messages. It’s hard for me to predict in advance what parts of this will or won’t be clear, so I invite you to tell me what doesn’t make sense.

 

  • Read these articles, if you feel so inclined (ranked from most to least useful in my opinion):

https://www.lesswrong.com/posts/ZQG9cwKbct2LtmL3p/evaporative-cooling-of-group-beliefs

https://www.lesswrong.com/posts/L32LHWzy9FzSDazEg/motivated-stopping-and-motivated-continuation

https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes

https://meaningness.com/metablog/upgrade-your-cargo-cult

https://meaningness.com/geeks-mops-sociopaths

http://benjaminrosshoffman.com/construction-beacons/


 


I have been community building in Cambridge UK in some way or another since 2015, and have shared many of these concerns for some time now. Thanks so much for writing them much more eloquently than I would have been able to!

To add some more anecdotal data, I also hear the 'cult' criticism all the time. In terms of getting feedback from people who walk away from us: this year, an affiliated (but non-EA), problem-specific table coincidentally ended up positioned downstream of the EA table at a freshers' fair. We anecdotally overheard approx 10 groups of 3 people discussing that they thought EA was a cult, after they had bounced from our EA table. Probably around 2000-3000 people passed through, so the ~30 people we overheard amount to only 1-2% of them.

I managed to dig into these criticisms a little with a couple of friends-of-friends outside of EA, and got a couple of common pieces of feedback which it's worth adding.

  • We give away many free books, lavishly. They are written by longstanding members of the community. To some outside the community, these feel like doctrine.
  • Being a member of the EA community is all or nothing. My best guess is we haven't thought of anything less intensi
... (read more)

In my own work now, I feel much more personally comfortable leaning into cause area-specific field building, and groups that focus around a project or problem. These are much more manageable commitments, and can exemplify the EA lens of looking at a project without it being a personal identity.

The absolute strongest answer to most critiques or problems mentioned recently is strong object-level work.

If EA has the best leaders, the best projects and the most success in executing genuinely altruistic work, especially in a broad range of cause areas, that is a complete and total answer to:

  • “Too much” spending
  • billionaire funding/asking people to donate income
  • most “epistemic issues”, especially with success in multiple cause areas

If we have the world leaders in global health, animal welfare, pandemic prevention, and AI safety each saying, “Hey, EA has the strongest leaders and its ideas and projects are reliably important and successful”, no one will complain about how many free books are handed out.

Joe Collman
I broadly agree with this, but at least with AI safety there's a Goodharting issue: we don't want AIS researchers optimising for legibly impressive ideas/results/writeups. I assume there's a similar-in-principle issue for most cause areas, but it does seem markedly worse for AIS, given the lack of meaningful feedback on the most important issues.

There's a significant downside even in having some proportion of EA AIS researchers focus on more legible results: it gives a warped impression of useful AIS research to outsiders. This happens by default, since there are many incentives to pick a legibly impressive line of research, and there'll be more engagement with more readable content.

None of this is to say that I know e.g. MIRI-style research to be the right approach. However, I do think we need to be careful not to optimise for the appearance of strong object-level work.
nananana.nananana.heyhey.anon
I agree and think this is an argument for investing in cause specific groups rather than generalized community building.
[anonymous]

When I was working for EA London in 2018, we also had someone tell us that the free books thing made us look like a cult and they made the comparison with free Bibles.

One option here could be to lend books instead. Some advantages:

  • Implies that when you're done reading the book you don't need it anymore, as opposed to a religious text which you keep and reference.

  • While the distributors won't get all the books back (and that's fine) the books they do get back they can lend out again.

  • Less lavish, both in appearance and in reality.

This is what we do at our meetups in Boston.

Adam Binksmith
It's also a nice nudge for people to read the books (I remember reading Doing Good Better in a couple of weeks because a friend/organiser had lent it to me and I didn't want to keep him waiting).

I believe that EA could tone down the free books by 5-10% but I am pretty skeptical that the books program is super overboard.

I have 50+ books I've gotten at events over the past few years (when I was in college), mostly politics/econ/phil stuff: the complete works of John Stuart Mill and Adam Smith, The Myth of the Rational Voter, The Elephant in the Brain, The Three Languages of Politics, etc. (all physical books). Bill Gates' book has been given out as a free PDF recently.

So I don't think EA is a major outlier here. I also like that there are some slightly less "EA books" in the mix like the Scout Mindset and The AI Does Not Hate You.

I think it's not free books per se that are problematic, but free books paired with phrases like "here's what's really important" and "this is how to think about morality", in the context of the Bible comparison.

I'm not sure what campus EA practices are like - but, in between pamphlets and books, there are zines. Low-budget, high-nonconformity, high-persuasion. Easy for students to write their own, or make personal variations, instead of treating them like official doctrine. E.g. https://azinelibrary.org/zines/

[anonymous]

Nice. And when it comes to links, ~half the time I'll send someone a link to the Wikipedia page on EA or longtermism rather than something written internally.

The criticisms of EA movement building tactics that we hear are not necessarily the ones that are most relevant to our movement goals. Specifically, I'm hesitant to update much on a few 18-year-olds who decide we're a "cult" after a few minutes of casual observation at a freshers' fair. I wouldn't want to be part of a movement that eschewed useful tools for better integrating its community because it's afraid of the perception of a few sarcastic teenagers.

Instead, I’m interested in learning about the critiques of EA put forth by highly-engaged EAs, non-EAs, semi-EAs, and ex-EAs who care about or share at least some of our movement goals, have given them a lot of thought, are generally capable people, and have decided that participation in the EA movement is therefore not for them.

I made this comment with the assumption that some of these people could have extremely valuable skills to offer to the problems this community cares about. These are students at a top UK university for the sciences, many of whom go on to be significantly influential in politics and business, at rates much higher than at other universities or in the general population.

I agree not every student fits this category, or is someone who will ever be inclined towards EA ideas. However I don't know if we are claiming that being in this category (e.g. being in the top N% at Cambridge) correlates with a more positive baseline-impression of EA community building? Maybe the more conscientious people weren't ringleaders in making the comments, but they will definitely hear them which I think could have social effects.

I agree that EA will not be for everyone, and we should seek good intellectual critiques from those people that disagree on an intellectual basis. But to me the thrust of this post (and the phenomenon I was commenting on) was: there are many people with the ability to solve the world's biggest problems. It would be a shame to lose their inclination purely due to our CB strategies. If our... (read more)

But to me the thrust of this post (and the phenomenon I was commenting on) was: there are many people with the ability to solve the world's biggest problems. It would be a shame to lose their inclination purely due to our CB strategies. If our strategy could be nudged to make a better impression at people's first encounter with EA, we could capture more of this talent and direct it to the world's biggest problems.

Another way of stating this is that we want to avoid misdirecting talent away from the world's biggest problems. This might occur if EA has identified those problems and effectively motivates its high-aptitude members to work on them, but fails to recruit the maximum number of high-aptitude members, due to CB strategies optimized for attracting larger numbers of low-aptitude members.

This is clearly a possible failure mode for EA.

The epistemic thrust of the OP is that we may be missing out on information that would allow us to determine whether or not this is so, largely due to selection and streetlamp effects.

Anecdata is a useful starting place for addressing this concern. My objective in my comment above is to point out that this is, in the end, just anecdata, and t... (read more)

A friendly hello from your local persuasion-resistant, moderately EA-skeptical hole-picker :)

Geoffrey Irving
Nice to see you here, Ferenc! We've talked before, when I was at OpenAI and you at Twitter, and I'm always happy to chat if you're pondering safety things these days.

Hi, thank you for starting this conversation! I am an EA outsider, so I hope my anecdata is relevant to the topic. (This is my first post on the forums.) I found my way to this post during an EA rabbit hole after signing up for the "Intro to EA" Virtual Program.

To provide some context, I heard about EA a few years ago from my significant other. I was/am very receptive to EA principles and spent several weeks browsing through various EA resources/material after we first met. However, EA remained in my periphery for around three years until I committed to giving EA a fair shake several weeks ago. This is why I decided to sign up for the VP.

I'm mid-career instead of enrolled in university, so my perspective is not wholly within the scope of the original post. However, I like to think that I have many qualities the EA community would like to attract:

  • I (dramatically) changed careers to pursue a role with a more significant positive impact and continue to explore how I can apply myself to do the "most good".
  • I'm well-educated (1 bachelor's degree & 2 master's degrees)
  • As a scientist for many years, I value evidence-based decision-making and rationalit
... (read more)
New Guy
While the post and this comment are now both ancient, I feel compelled to at least leave a short note here after reading them. My background is in many ways similar to Sarah's, and I came into contact with the EA community about half a year ago. Unfortunately, 2.5 years later, most of the points raised here resonate heavily with my experiences. Especially the hive mentality, heavy focus on students (with little effort towards professionals) and overemphasis on AI safety (or more generally - highly-specialized cause areas overshadowing the overall philosophy). I don't know what the solutions are, but the problem seems to be still present.

Thanks so much for sharing your thoughts in such detail here :)

Thank you for raising this issue. You are in your 30s, I am in my 50s, and I am part way through the Intro to EA program. If you can feel like an outsider at thirty-something, imagine how it might be for a fifty-something.

Briefly, these are my thoughts:

  1. There is such a predominance of youth that there is a sense that much of this has not been thought about before, and therefore that my lived experience has little merit. Yet I have lived the life of an EA, even if it had no name.
  2. There is a certain complacency in the idea that EA is using science for decision-making (I noted Toby Ord's reference to that in a talk), without perhaps remembering that scientists are simply biased humans too. Galton was a much-lauded academic statistician, but he also pioneered eugenics.
  3. I have a bias here, as someone whose neurodiversity means I have significant issues with mathematical concepts, yet I managed to understand the excess risk being taken in the City in 2006. I left my legal role as I was exhausted defending the spread of the much-praised skills of hedge funders etc. I remain convinced that there is a substantial failure to admit that pure human behaviours are very strong over-rulers.
... (read more)
Max_Daniel
Thanks so much for sharing your perspective in such detail! Just dropping a quick comment to say you might be interested in this post on EA for mid-career people by my former colleague Ben Snodin if you haven't seen it. I believe that he and collaborators are also considering launching a small project in this space.
Sarah Reed
Thanks for the lead! The post you linked seems perfectly suited to me. I'll also contact Ben Snodin to inquire about what he may be working on around this matter.
Linch
For onlookers, there's also a website by Ben (my coworker) and Claire Boine. 

Hey Theo - I’m James from the Global Challenges Project :)

Thanks so much for taking the time to write this - we need to think hard about how to do movement building right, and it's great for people like you to flag what you think is going wrong and what you see as pushing people away.

Here’s my attempt to respond to your worries with my thoughts on what’s happening!

First of all, just to check my understanding, this is my attempt to summarise the main points in your post:

My summary of your main points

We’re missing out on great people as a result of how community building is going at student groups. A stronger version of this claim would be that current CB may be selecting against people who could most contribute to current talent bottlenecks. You mention 4 patterns that are pushing people away:

  1. EA comes across as totalising and too demanding, which pushes away people who could nevertheless contribute to pressing cause areas. (Part 1.1)
  2. Organisers come across as trying to push particular conclusions to complex questions in a way that is disingenuous and also epistemically unjustified. (Part 1.2)
  3. EA comes across as cult-like; primarily through appearing to be trying too hard to be persuasiv
... (read more)

Thanks for this post. If true, it does describe a pretty serious concern. 

One issue I've always had with the "highly engaged EA" metric is that it's only a measure for alignment,* but the people who are most impactful within EA have both high alignment and high competence. If your recruitment selects only on alignment this suggests we're at best neutral to competence and at worst (as this post describes) actively selecting against competence. 

(I do think the elite university setting mitigates this harm somewhat, e.g. 25th percentile MIT students still aren't stupid in absolute terms). 

That said, I think the student group organizers I recently talked to are usually extremely aware of this distinction. (I've talked to a subset of student group organizers from Stanford, MIT, Harvard (though less granularity), UPenn (only one) and Columbia, in case this is helpful). And they tend to operationalize their targets more in terms of people who do good EA research, jobs, and exciting entrepreneurship projects, rather than in terms of just engagement/identification. Though I could be wrong about what they care about in general (as opposed to just when talking with me).

The pet t... (read more)

Mart_Korz
Regarding "Pascal's Mugging": I am not the author, so I might well be mistaken, but I think I can relate to the intended meaning more closely than "vaguely shady". There is one paragraph which I read as: "Pascal's mugging" describes a rhetorical move that introduces huge moral stakes into the world-view in order to push people into drastically altering their actions and priorities.

I think that this in itself need not be problematic (there can be huge stakes which warrant a change in behaviour), but if there is social pressure involved in forcing people to accept the premise of huge moral stakes, things become problematic.

One example is the "child drowning in a pond" thought experiment. It does introduce large moral stakes (the resources you use for conveniences in everyday life could in fact be used to help people in urgent need; and in the thought experiment itself you would decide that the latter is more important) and can be used to imply significant behavioural changes (putting a large fraction of one's resources towards helping worse-off people). If this argument is presented with strong social pressure to not voice objections, that would be a situation which fits under Pascal-mugging in my understanding.

If people are used to this type of rhetorical move, they will become wary as soon as anything along the lines of "there are huge moral stakes which you are currently ignoring and you should completely change your life-goals" is mentioned to them. Assuming this, I think the worry makes a lot of sense.
Linch
Thanks a lot for the explanation! It does make more sense in the context of the text, though to be clear this is extremely far from the original meaning of the phrase, and the phrase also has very negative connotations in our community. So I'd prefer it if future community members don't use "Pascal's mugging" to mean "a rhetorical move that introduces huge moral stakes into the world-view in order to push people into drastically altering their actions and priorities", unless maybe it's locally-scoped and clearly defined in the text to mean something that does not have the original technical meaning. It is unfortunate that I can't think of a better term off the top of my head for this concept, however; I would be interested in good suggestions.
Tessa A 🔸
What is the definition you'd prefer people to stick to? Something like "being pushed into actions that have a very low probability of producing value, because the reward would be extremely high in the unlikely event they did work out"? The Drowning Child argument doesn't seem like an example of Pascal's Mugging, but Wikipedia gives the example of: and I think recent posts like The AI Messiah are gesturing at something like that (see, even, this video from the comments on that post: Is AI Safety a Pascal's Mugging?).
Linch
Yes, this is the definition I would prefer. I haven't watched the video, but I assume it's going to say "AI Safety is not a Pascal's Mugging because the probability of AI x-risk is nontrivially high." So someone who comes into the video with the assumption that AI risk is a clear Pascal's Mugging - since they view it as "a rhetorical move that introduces huge moral stakes into the world-view in order to push people into drastically altering their actions and priorities" - would be pretty unhappy with the video and think that there was a bait-and-switch.
Arepo
I'm not sure the most impactful people need have high alignment. We've disagreed about Elon Musk in the past, but I still think he's a better candidate for the world's most counterfactually positive human than anyone else I can think of. Bill Gates is similarly important and similarly kinda-but-conspicuously-not-explicitly aligned.

Yes, if you rank all humans by counterfactual positive impact, most of them are not EA, because most humans are not EAs. 

This is even more true if you are mostly selecting on people who were around long before EA started, or if you go by ex post rather than ex ante counterfactual impact (how much credit should we give to Bill Gates' grandmother?)

(I'm probably just rehashing an old debate, but also Elon Musk is in the top 5-10 contenders for "most likely to destroy the world," so that's at least some consideration against him specifically).

Arepo
I don't think the background rate is relevant here. I was contesting your claim that 'the people who are most impactful within EA have both high alignment and high competence'. It depends on what you mean by 'within EA', I guess. If you mean 'people who openly espouse EA ideas', then the 'high alignment' seems uninterestingly true almost by definition. If you mean 'people who are doing altruistic work effectively', then Gates and Musk are, IMO, strong enough counterpoints to falsify the claim.
Linch
There are many/most people who openly espouse EA ideas who I do not consider highly aligned. 

I feel a desire to lower some expectations:

  • I don't think any social movement of real size or influence has ever avoided drawing some skepticism, mockery, or even suspicion.
  • I think community builders should have a solid and detailed enough understanding of EA received wisdom to be able to lay out the case for our recommendations in a reasonably credible way, but I don't think it's reasonable to expect them to be domain experts in every domain, and that means that sometimes they aren't going to be able to seem impressive to every domain expert that comes to us.
  • To be frank, it isn't realistic to be able to capture the imagination of everyone who seems promising even if we make the best possible versions of our arguments. Some people will inevitably come away thinking we "just don't get it", that we haven't addressed their objections, that we're not serious about [specific concern X] and therefore our point of view is uninteresting. Communication channels just aren't high-fidelity enough, and people's engagement heuristics aren't precise enough, to avoid this happening from time to time.
  • When some people are weirded out by the way we behave or try to attract new members, it seems t
... (read more)

I think it will be really important for EAs to engage in more empirical work to understand how people think about EA. Of course you don't want people to feel like they're being fed the results of a script tested by a focus group (that's the whole point of this post), but you do want to actually know in reliable ways how bad some of these problems are, how things are resonating, and how to do better in a genuine and authentic way. Empirical results should be a big part of this (though not all of it), but right now they aren't, and this seems bad. Instead, we frequently confuse "what my immediate friends in my immediate network think about EA" with "what everyone thinks about EA" and I think this is a mistake.

This is something Rethink Priorities is working on this year, though we invite others to do similar work. I think there's a lot we can learn!

[anonymous]
Strongly agree with this take. There's nothing stopping us from getting empirical data here, and I think we have no strong reason to expect our personal experiences to generalise, or to expect models we create that aren't theoretically or empirically grounded to be correct.
nananana.nananana.heyhey.anon
I agree with you, and I think this somewhat supports the OPs concern. Are most uni groups capable of producing or critiquing empirical work about their group, or about EA or about their cause areas of choice? Are they incentivized to do so at all? Sometimes yes, but mostly no.

Thank you for writing this. I worry a lot about university groups being led by inexperienced people who have only heard of EA recently, especially given the huge focus on university groups (so, so much more focus than on regional groups or professional groups)! EA seems to be really banking on universities**, so much so that we are kinda screwed if it is done poorly, and turning people off. Some thoughts and theories:

1. Experience of organizers:

I bet the mentorship and training in the new University Group Accelerator Program will help, but also I am not sure how much time a mentor will have, and that still assumes only 25 hours of engaging with EA content. From the website:

"The program is designed for groups that... have at least two interested organizers where... at least one has engaged with high-quality EA ideas for at least 25 hours (e.g. completed an intro fellowship or equivalent) and is comfortable facilitating group discussions or could be with training"

I realize a low amount of hours is a given for this role if you want it to happen at all, but still. That could be enough for someone who is a natural conversationalist to integrate a lot of key lessons and have a deep ... (read more)

[anonymous]

> Separation from friends and loved ones: Happens accidentally due to value changes.

I hope by this you mean something like "People in general tend to feel a bit more distant from friends when they realise they have different values and EA values are no exception." But if you've actually noticed much more substantial separation tending to happen, I personally think this is something we should push back against, even if it does happen accidentally. Not just for optics' sake ("Mentioning other people and commitments in your life other than EA might go a long way"), but for not feeling socially/professionally/spiritually dependent on one community, for avoiding groupthink, for not feeling pressure to make sacrifices beyond your 'stretch zone.'

Hi Ivy,

Just wanted to hop in re: the University Group Accelerator program. You are definitely hitting on some key points that we have been strategizing around for UGAP. I just want to clarify a few things:

  • We see 25 hours as the minimum amount of time engaging with EA ideas before someone should help start a group. Oftentimes we think it should be more, but there have been cases of really great organizers springing up after just an intro fellowship. We have additional screening for UGAP groups beyond just meeting the pre-requisites that dives a bit more into the nuances you mentioned around what high-quality content is.

  • UGAP has been very much in beta mode, but we are hoping to share the training materials from the upcoming round. :) We would be excited to have people red-team these once they are presentable.
Ivy Mazzola
Thanks for responding! I'm actually super excited about UGAP and have already recommended the program to student organizers now that your applications are open (applications are open, people!). I do note that the 25 hour time commitment is for "at least one organizer", but I also think mentoring will go a long way to make those 25+/- hours count for more. That's great that you do interviews to determine quality and you clarify what quality content is. Excited to see what comes of it :)
nananana.nananana.heyhey.anon
Re: “there have been cases of really great organizers springing up after just an intro fellowship.” I definitely believe this can happen and am glad you allow for that. What makes someone seem really great — epistemics, alignment/buy-in, skill in a relevant area of study, __?
Chris Leong
There are multiple reasons for the focus on student outreach:

  • Students are early on in their careers. You are much more likely to be able to affect their trajectory because a) they are often still deciding (and may even seek out your advice!) b) they lack sunk cost c) they have access to low-cost opportunities like internships to try out various paths.
  • Students have large amounts of free time and the enthusiasm/energy of youth. If an aspect of EA sounds interesting to them, they are more likely to read about it. They have more time to volunteer and more time to invest in skilling up.
  • Top schools provide an opportunity to connect with people at a certain level of talent. These people are much harder to access later in their careers, both because they are busier, but also because they are distributed at many different companies instead of all concentrated on a few campuses. Beyond this, attending events is so much easier as a student, and schools have, for instance, O-Days where societies can recruit members.
  • Besides these theoretical reasons, I expect CEA is basing this on experience and looking at the highest performers in EA and how they became involved in EA. See, for example, this post which notes: Obviously, that's cherry-picked, but it's still illustrative of how impactful uni group organising can be.

I am aware of the reasons, and I still think it has been focused on to the neglect of other things. Perhaps I should have said extreme focus instead. Maybe that is budget consciousness (uni groups have in the past been run by free and cheap volunteers), but it doesn't seem like that should have been a strict consideration for a couple of years now. I'm not saying student groups aren't good, but that given bottlenecks and given CEA's limited bandwidth, I don't think it warrants the extreme focus and bullishness I see from many these days, to, I can only assume, the detriment of other programs and other experimentation. Almost all of those students will still be recommended to enter regular careers and gain career capital before they can be competitive for doing direct work, and it is unclear how many students from these groups are even going for direct work on longtermist areas. I think perspectives here might depend on AGI timelines.

Let me also clarify that I am talking about uni groups, as opposed to targeted skilling-up programs hosted at universities. I'm also guessing that that 2015 Stanford group was a lot different from the uni groups today; 8-week intro fellowships didn't exist then.

tamgent
So from the perspective of the recruiting party, these reasons make sense. From the perspective of a critical outsider, these very same reasons can look bad (and are genuine reasons to mistrust the group that is recruiting):
- easier to manipulate their trajectory
- easier to exploit their labour
- free selection, building on top of/continuing rich-get-richer effects of 'talented' people
- "let's apply a supervised learning approach to high-impact people acquisition; the training data biases won't affect it"
Chris Leong
Well, haters are gonna hate. Maybe that's too blasé, but as long as we are talking about university groups rather than high schools, the PR risks don't feel too substantial.
N N
A small thing, but citing a particular person seems less culty to me than saying "some well-respected figures think X because Y". Having a community orthodoxy seems like worse optics than valuing the opinions of specific named people.
Ivy Mazzola
Tbh I've had success with this approach. Usually, someone will say "like who?" and then I get to rattle off some names with a clause-length bio without making their eyes glaze over, because they proactively requested the information. Other times they won't ask, because they are more interested in the overall point than in who thinks it anyway, and they probably already trust me by that point. Sometimes I'd actually have to google anyway ("well, I know one was the head of this org and one was the author of this book, let me look those up") and then people are like "whatever, whatever, I believe you." It is the ideas that matter anyway.

In general, I think it is good to talk casually, and this kind of wording is very natural for me, with the benefit that I don't screw up my train of thought trying to remember names. If it isn't natural for you (and I guess for many EAs it won't be, now that you mention it), don't do it.
Florence
I think she is suggesting that only reading up on one person's thoughts and treating them like gospel is cult-like and bad, and that then sharing that singular view gives off cult-like impressions (understandably). Rather, being more open to learning many different people's views, forming your own nuanced opinion, and then sharing that is far more valuable both intrinsically and extrinsically! I think it's pretty clear you shouldn't be saying "some well-respected figures think X because Y" regardless; that's like 101 bad epistemics, because it's vague and not referenceable.
projectionconfusion
The focus on student groups is also inherently redflaggy for some people, as it can be viewed as looking for people who have less scepticism and experience.

I've been speaking to a number of people in university organizing groups who have been aware of these issues, and almost across the board the major issue they feel is that it seems too conflict-generating/bad/guilt-inducing to essentially tell their friends and peers in their or other universities something like "Hey, I think the thing you're doing is actually causing a lot of harm."

I would be very in favor of helping find ways to facilitate better communication between these groups that specifically targets ways they can improve in non-blaming, pro-social and supportive ways.

[anonymous]

I wonder if the suggestion here to replace some student reading groups with working groups might go some way to demonstrating that EA is a question.

I don't even think the main aim should be to produce novel work (as suggested in that post); I'm just thinking about having students practice using the relevant tools/resources to form their own conclusions. You could mentor individuals through their own minimal-trust investigations. Or run fact-checking groups that check both EA and non-EA content (which hopefully shows that EA content compares pretty well but isn't perfect...and if it doesn't compare pretty well, that's very useful to know!)

This feels much closer to how I experienced EA student groups 5-7 years ago - e.g. Tom and Jacob did exactly this with the Oxford Prioritisation Project, and wrote up a very detailed evaluation of it.

[anonymous]
Aye and EA London did a smaller version of something in this space focused on equality and justice.

My first thought on reading this suggestion for working groups was "That's a great idea, I'd really support someone trying to set that up!"

My second thought was "I would absolutely not have wanted to do that as a student. Where would I even begin?"

My third thought was that even if you did organise a group of people to try implementing the frameworks of EA to build some recommendations from scratch, this would never compare to the research done by long-standing organisations that dedicate many experienced people's working lives to finding the answers. The conclusion of the project would surely amount to a sort of verbal participation medal: well done, but you're best off looking at GiveWell's charities anyway.

Maybe I'm being overly cynical here. It seems a good way to engage people who could later develop into strong priorities/charity evaluation researchers. I suspect it's best that any such initiative be administered by people already working to a high standard in those fields for that benefit to be properly reaped, however.

[anonymous]

Agreed, hence "I don't even think the main aim should be to produce novel work". Imagine something between a Giving Game and producing GiveWell-standard work (much closer to the Giving Game end). Like the Model United Nations idea - it's just practice.

Max Clarke
I've been very keen to run "deep dives" where we do independent research on some topic, with the aim that the group as a whole ends up with significantly more expertise than at the start. I've proposed doing this with my group, but people are disappointingly unreceptive to it, mainly because of the time commitment and "boringness".
[anonymous]
Maybe you want to select for the kind of people who don't find it too boring! My guess, though, is that the project idea as currently stated is actually a bit too boring for even most of the people that you'd be trying to reach. And I guess groups aren't keen to throw money at trying to make it more fun/prestigious in the current climate... I've updated away from thinking this is a good idea a little bit, but would still be keen to see several groups try it.
Max Clarke
No no, I still believe it's a great idea. It just needs people to want to do it, and I was just sharing my observation that there don't seem to be that many people who want it enough to offset other things in their life (everyone is always busy). Your comment about "selecting for people who don't find it boring" is a good re-framing; I like it.
[anonymous]
Oh yes I know - with my reply I was (confusingly) addressing the unreceptive people more than I was addressing you. I'm glad that you're keen :-)
nananana.nananana.heyhey.anon
Strong +1. This feels much more like the correct use of student groups to me.

This is a great post! Upvoted. I appreciate the exceptionally clear writing and the wealth of examples, even if I'm about 50/50 on agreeing with your specific points.

I haven't been involved in university community building for a long time, and don't have enough data on current strategies to respond comprehensively. Instead, a few scattered thoughts:

I was talking to a friend a little while ago who went to an EA intro talk and is now doing one of 80,000 Hours' recommended career paths, with a top score for direct impact. She’s also one of the most charismatic people I know, and she cares deeply about doing good, with a healthy practical streak.

She’s not an EA, and she’s not going to be. She told me that she likes the concept and the framing, and that since the intro talk she’s often found that when faced with big ethical questions it’s useful to ask “what would an EA do”. But she’s not an EA.

I don't like using "EA" as a noun. But if we do want to refer to some people as "EAs", I think your friend has the most important characteristics described by that term.

Using EA's core ideas as a factor in big decisions + caring a lot about doing good + strong practical bent + working on promisin... (read more)

Minor elaboration on your last point: a piece of advice I got from someone who did psychological research on how to solicit criticism was to brainstorm what someone's most likely criticism of you would be, and then offer that up when requesting criticism, as this is a credible indication that you're open to it. Examples:

  • "Hey, do you have any critical feedback on the last discussion I ran? I talked a lot about AI stuff, but I know that can be kind of alienating for people who have more interest in political action than technology development... Does that seem right? Is there other stuff I'm missing?"
  • "Hey, I'm looking for criticism on my leadership of this group. One thing I was worried about is that I make time for 1:1s with new members, but not so much with people that have been in the group for more than one year..."
  • "Did you think there was anything off about our booth last week? I was noticing we were the only group handing out free books; maybe that looked weird. Did you notice anything else?"
nananana.nananana.heyhey.anon
Appreciate your comments, Aaron. You say:

But I am confident that leaders' true desire is "find people who have great epistemics [and are somewhat aligned]", not "find people who are extremely aligned [and have okay epistemics]".

I think that's true for a lot of hires. But does that hold equally true when you think of hiring community builders specifically? In my experience (5-ish people), leaders' epistemic criteria seem less stringent for community building. Familiarity with EA, friendliness, and productivity seemed more salient.
Aaron Gertler 🔸
This is a tricky question to answer, and there's some validity to your perspective here. I was speaking too broadly when I said there were "rare exceptions" when epistemics weren't the top consideration. Imagine three people applying to jobs:

  • Alice: 3/5 friendliness, 3/5 productivity, 5/5 epistemics
  • Bob: 5/5 friendliness, 3/5 productivity, 3/5 epistemics
  • Carol: 3/5 friendliness, 5/5 productivity, 3/5 epistemics

I could imagine Bob beating Alice for a "build a new group" role (though I think many CB people would prefer Alice), because friendliness is so crucial. I could imagine Carol beating Alice for an ops role. But if I were applying to a wide range of positions in EA and had to pick one trait to max out on my character sheet, I'd choose "epistemics" if my goal were to stand out in a bunch of different interview processes and end up with at least one job.

One complicating factor is that there are only a few plausible candidates (sometimes only one) for a given group leadership position. Maybe the people most likely to actually want those roles are the ones who are really sociable and gung-ho about EA, while the people who aren't as sociable (but have great epistemics) go into other positions. This state of affairs allows for "EA leaders love epistemics" and "group leaders stand out for other traits" at the same time.

Finally, you mentioned "familiarity" as a separate trait from epistemics, but I see them as conceptually similar when it comes to thinking about group leaders. Common questions I see about group leaders include "could this person explain these topics in a nuanced way?" and "could this person successfully lead a deep, thoughtful discussion on these topics?" These and other similar questions involve familiarity, but also the ability to look at something from multiple angles, engage seriously with questions (rather than just reciting a canned answer), and do other "good epistemics" things.

Fwiw, my intuition is that EA hasn't been selecting against, e.g. good epistemic traits historically, since I think that the current community has quite good epistemics by the standards of the world at large (including the demographics EA draws on). Of course, current EA community-building strategies may have caused that to change, but, fwiw, I doubt it.

I also think that highly engaged EAs may generally be substantially more valuable, meaning that focusing on that makes sense, but would be interested in empirical analyses from community-builders.

Fwiw, my intuition is that EA hasn't been selecting against, e.g. good epistemic traits historically, since I think that the current community has quite good epistemics by the standards of the world at large (including the demographics EA draws on).


I think it could be the case that EA itself selects strongly for good epistemics (people who are going to be interested in effective altruism have much higher epistemic standards than the world or large, even matched for demographics), and that this explains most of the gap you observe, but also that some actions/policies by EAs still select against good epistemic traits (albeit in a smaller way).

I think these latter selection effects, to the extent they occur at all, may happen despite (or, in some cases, because of) EA's strong interest in good epistemics. E.g., EAs care about good epistemics, but the criterion they use to select for good epistemics is, in practice, whether the person expresses positions/arguments the selectors believe are good ones; this functionally selects more for deference than for good epistemics.

Thomas Kwa
I think it's simultaneously true that highly engaged EAs are much more valuable, and that community builders shouldn't focus primarily on maximizing the number of HEAs. This is due to impact having significant dependence on talent and other factors orthogonal to engagement.

He recently described something to me as ‘too EA’. When I pushed him to elaborate, what he meant was something like ‘has the appearance of inviting you to make your own choice but is not-so-subtly trying to push you in a specific direction’.


This reminds me of Bible Study groups where discussion was encouraged but never really approved of, some of which I led (badly). I have empathy for those leading these.

As a leader, it is genuinely hard to balance:

  • allowing discussion
  • staying on topic
  • pointing out the best answers
  • allowing a safe space for disagreement
     

I agree with the author's criticisms, but I have led a lot of group discussions and I do find it really hard.

My suggestion here is to have two people leading the group: one who takes the role of moderator, asking questions and moving the group on, and one who argues the EA point of view, and at times gets shut down by the moderator.

There's a joke that whatever the question is in Bible Study, the correct answer is always 'God', 'Jesus', or 'The Bible'. I think it would be bad if the EA equivalent to that became 'AI', 'Existential risk', and 'Randomised controlled trials'.

On the other hand, discussion relies on people having a shared pool of information, and I think it's very easy to overestimate how much common information people share. I've found in group discussions it's common that someone who's not a regular will bring a whole set of talking points, articles, authors, ideas, etc. that I had no idea even existed till then. Which is great, except I don't know what to say in response other than 'uh, what was the name of that? I'll have to read into it'.

Marcel D
Yeah, I recall my university organizing days and the awkwardness/difficulty of trying to balance "tell me about the careers you are interested in and why" and "here are the careers that seem highly impactful according to research/analysis." I frequently thought things like "I'd like people to have a way to share their perspective without feeling obligated to defend it, but I also don't want to blanket-validate everyone's perspectives by simply not being critical."

The comment below is made in a personal capacity, and is speaking about a specific part of the post, without intending to take a view on the broader picture (though I might make a broader comment later if I have time).

Thanks for writing this.  I particularly appreciated this example:

A friend of mine at a different university attended the EA intro fellowship and found it lacking. He tells me that in the first session, foundational arguments were laid out, and he was encouraged to offer criticism. So he did. According to him, the organisers were grateful for the criticism, but didn’t really give him any satisfying replies. They then proceeded to build on the claims about which he remained unconvinced, without ever returning to it or making an effort to find an answer themselves.

I'm pretty worried about this. I got the impression from the rest of your post that you suspect some of the big picture problem is community builders focusing too much on what will work to get people into AI safety, but I think this particular failure mode is also a huge issue for people with that aim. The sorts of people who will hear high-level/introductory arguments and immediately be able to come u... (read more)

[anonymous]

+1 to the concern on epistemics, that is one of my bigger concerns also.

Really excited for the new syllabus! Please do share it when it's ready :)

[anonymous]

Very interesting. I haven't come into contact with any student groups, so can't comment on that. But here are my experiences of what's worked well and less well, coming in as a longtime EA-ish giver in my late 30s looking for a more effective career:


Good

(Free) books:  I love books - articles and TED talks are fine for getting a quick and simple understanding of something, but nothing beats the full understanding from a good book. And some of the key ones are being given away free! Picking out a few, the Alignment Problem, The Precipice and Scout Mindset give a grounding in AI alignment, longtermism/existential risk and rational thinking techniques, and once you have a handful under your belt you're in a solid place to understand and contribute to some discussions. They're good writers too; it's not just information transfer. The approach of 'here's a free book, go away and read it, here's some resources if you want to research further' sounds like the polar opposite of what's described above. It worked well for me. Maybe a proper 'EA book starter list' would help it work even better (there's a germ of this lurking halfway down the page here, but surely this could be standalone an... (read more)

Occasionally, apparent coldness to immediate suffering:  I've only seen this a bit, but even one example could be enough to put someone off for good.

I would really like to ban the term "rounding error".

[anonymous]
I haven't come across this yet... is it what I think it is?
freedomandutility
Yep. It seems pretty easy to optimise for consequentialist impact and still be more virtuous and principled than most people. Maybe EA can lead to bad moral licensing effects in some people.
Peter Elam
I really like that piece that you linked to. Thanks for including it.
Arepo
In case anyone isn't aware of it, that's very much the demographic that CEEALAR (aka the EA hotel) is trying to support!

I'm curious whether community size, engagement level, and competence might matter less than the general perception of EA among non-EAs. 

Not just because low general positive perception of EA makes it harder to attract highly engaged, competent EAs. But also because general positive perception matters even if it never results in conversion. General positive perception increases our ability to cooperate with and influence non-EA individuals and institutions.

Suppose an aggressive community building tactic attracts one HEA, of average competence. In addition, it gives a number of people n a slightly negative view of EA -- not a strongly felt opposition, just enough of a dislike that they mention it in conversations with other non-EAs sometimes. What n would we accept to make this community building tactic expected value neutral? (This piece seems to suggest that many current strategies fit this model.)
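The break-even point in that question can be sketched with a couple of lines of arithmetic. All numbers below are hypothetical placeholders, not estimates from the post or this comment:

```python
# Hypothetical break-even for the trade-off above: one HEA gained vs.
# n people given a slightly negative view of EA. Both values are
# illustrative assumptions, in arbitrary "impact units".

value_of_one_hea = 100.0    # assumed expected value of one average HEA
cost_per_detractor = 0.5    # assumed expected cost per mildly put-off person

# The tactic is expected-value neutral when n detractors offset one HEA,
# i.e. when n * cost_per_detractor == value_of_one_hea:
break_even_n = value_of_one_hea / cost_per_detractor
print(break_even_n)  # 200.0
```

Under these made-up numbers, the tactic breaks even at n = 200; the real difficulty, of course, is whether anyone has defensible estimates for either quantity.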

Thank you for the labor of writing this post, which was extremely helpful to me in clarifying my own thinking and concerns. I plan to share it widely.

"I think it would be tempting to assume that the best of these people will already have intuited the importance of scope sensitivity and existential risk, and that they’ll therefore know to give EA a chance, but that’s not how it works." This made my heart sing. EA would be so much better if more people understood this.

I found this post really useful (and persuasive), thank you!

One thing I feel unconvinced about:

"Another red flag is the general attitude of persuading rather than explaining."

For what it's worth, I'm not sure naturally curious/thoughtful/critical people are particularly more put off by someone trying to persuade them (well, by answering their objections, etc.) than by them explaining an idea, especially if the idea is a normative thesis. It's weird for someone to be like "just saying, the idea is that X could have horrific side effects and little upside because [argument]. Yes, I believe that's right. No need to adopt any beliefs or change your actions though!" That just makes them seem like they don't take their own beliefs seriously. I'd much rather have someone say "I want to persuade you that X is bad, because I think it's important people know that so they can avoid X. OK, here goes: [argument]."

If that's right, does it mean that maybe the issue is more "persuade better"? e.g. by actually having answers when people raise objections to the assumptions being made?

At the opening session [Alice] disputes some of the assumptions, and the facilitators thank her for raising the concerns, but don’t really address them. They then plough on, building on those assumptions. She is unimpressed.

Seems like the issue here is more being unpersuasive, rather than too zealous or not focused enough on explaining.

nananana.nananana.heyhey.anon
I agree with you. Yet I bristle when people who I don’t know well start putting forth arguments to me about what is good/bad for me, especially in a context where I wasn’t expecting it. I’m much more accustomed to people thinking that moral relativism is polite, at least at first. Moral relativism can be annoying, but putting forth strong moral positions at eg a fresher’s fair does feel like something that missionaries do.

I like this criticism, but I think there are two essentially disjoint parts here that are being criticized. The first is excess legibility, i.e., the issue of having explicit metrics and optimizing to the metrics at all. The second is that a few of the measurements that determine how many resources a group gets/how quickly it grows are correlated with things that are not inherently valuable at best and harmful at worst. 

The first problem seems really hard to me: the legibility/autonomy trade-off is an age-old problem that happens in politics, business, and science, and seems to involve a genuine trade-off between organizational efficiency and the ability to capitalize on good but unorthodox ideas and individuals.

The second seems more accessible (though still hard), and reasonably separable from the first. Here I see a couple of things you flag (other than legibility/"corporateness" by itself) as parameters that positively contribute to growth but negatively contribute to the ability of EA to attract intellectually autonomous people. The first is "fire-and-brimstone" style arguments, where EA outreach tends to be all-or-nothing, "you either help save the sick children or you bu... (read more)

Thanks for this post! I used to do some voluntary university community building, and some of your insights definitely ring true to me, particularly the Alice example - I'm worried that I might have been the sort of facilitator to not return to the assumptions in fellowships I've facilitated.

A small note:

Well, the most obvious place to look is the most recent Leader Forum, which gives the following talent gaps (in order):

This EA Leaders Forum was nearly 3 years ago, so the talent gaps have possibly changed. There was a Meta Coordination Forum last year run by CEA, but I haven't seen any similar write-ups. This doesn't seem to be an important crux for most of your points, but I thought it would be worth mentioning.

When I came to university I had already read a lot of the Sequences ... 

 

You'd read the Sequences but you thought we were a cult? Inconceivable! 

(/sarcasm)

Oddly, while I agree with much of this post (and strong upvoted), it reads to me as evidencing many of the problems it describes! Almost all of the elements that make EA seem culty seem to me to hail from the rationality side of the movement: Pascalian reasoning, in-group jargon, hero worship, or rather epistemic deferral to heroes and to holy texts, and eschatology (tithes being the one counterexample I can think of), all of which I see in the OP.

I don't know what conclusion one is supposed to draw from this, but it disposes me both toward agreeing with your critique and toward greater scepticism that following your recommendations would do much to fix the problem.

I also don't have any great answers, but I do strongly feel that one can be an extremely valuable EA without having heard of the Sequences. I understand the efficiency of jargon, but I think in 90% of EA conversations where I hear it used, communicating more literally would have outweighed the efficiency loss - and that's without considerin... (read more)

Almost all of the elements that make EA seem culty seem to me to hail from the rationality side of the movement: Pascalian reasoning, in-group jargon, hero worship, or rather epistemic deferral to heroes and to holy texts, and eschatology

 

The hero worship is, I think, especially concerning, and is a striking way that implicit/"revealed" norms contradict explicit epistemic norms for some EAs.

Thanks for writing this - this resonates a lot with my experience, as I was also exposed to and very put off from EA in college! But have eventually, slowly, made my way back here :)

I want to add that many of the "disconcerting" tactics community builders use are pretty well-established among community organizers (and larger student groups, like Greek life). So my sense is that the key problem isn't that EA uses well-proven community-building tactics, but that it implements them poorly. Having a scripted 1:1, a CRM, intro talks; making leadership asks of younger and newer members; measuring success by gaining new members; and trying our best to connect someone's interests to the values and goals of our community are all very standard practice in community organizing. (They're also very sales-y tactics, which is probably why they feel off-putting and slimy. I think most policy and entrepreneur types would be aware of this as long as they had some experience in the field, but perhaps students might not be.)

I'm not sure what exactly EA is doing wrong, or where the line between "wholesome supportive community" and "creepy cult" is, and I'd love to think about this more. My intuition is that EA... (read more)

Thanks so much for writing this. As someone interested in starting to do community building at a university, this was helpful to read, especially the Alice/Bob example and the concrete advice. I do really think that EA could stand to be less big on recruiting HEAs. I think there are tons of people who are interested in EA principles but aren't about to make a career switch, and it's important for those people to feel welcome and like they belong in the community.

I was going to write "I kind of wish this post (or a more concise version) were required reading for community builders," and then I thought better of it and took actions about it -- namely, sent the link as feedback to the EA Student Group Handbook and made an argument that they should incorporate something like this into their guide for student groups.

A bunch of disorganized thoughts related to this post:

  • Fast growth still does lots of good, especially if you have short AI timelines. If the current policy of growth brings lots of adverse selection, the optimal policy might change to double the number of top AI safety researchers every 18 months, rather than double the number of HEAs every 12 months.
  • I think more potential top people are put off by EA groups having little overlap with their other interests, than are suspicious of EA being manipulative. This can be mitigated by focusing more on the object level, like discussion of problems in alignment, altpro, policy, or whatever.
  • People are commonly made uncomfortable by community-builders visibly optimizing against them. But we have to optimize. I think the solution here is to create boundaries so you're not optimizing against people. When talking about career changes, I think it's good to help the person preserve optionality so they're not stuck in an EA career path with little career capital elsewhere. I've also found it helpful to come at 1-1s with the frame "I'll help you optimize for your values".
  • The "Scaling makes them worse" section implies a tension between two cause
... (read more)
[anonymous]
I think the solution here is to create boundaries so you're not optimizing against people.

I prefer 80,000 Hours' 'plan changes' metric to the 'HEA' one for this reason (if I've understood you correctly).

introducing people to EA by reading prepared scripts

Huh, I'm not familiar with this, can you post a link to an example script or message me it?

I agree that reading a script verbatim is not great, and privately discussed info in a CRM seems like an invasion of privacy.

Privately discussed info in a CRM seems like an invasion of privacy.

I've seen non-EA college groups do this kind of thing and it seems quite normal. Greek organizations track which people come to which pledge events, publications track whether students have hit their article quota to join staff, and so on.

Doesn't seem like an invasion of privacy for an org's leaders to have conversations like "this person needs to write one more article to join staff" or  "this person was hanging out alone for most of the last event, we should try and help them feel more comfortable next time".

I keep going back and forth on this.

My first reaction was "this is just basic best practice for any people-/relationship-focused role, obviously community builders should have CRMs".

Then I realised none of the leaders of the student group I was most active in had CRMs (to my knowledge) and I would have been maybe a bit creeped out if they had, which updated me in the other direction.

Then I thought about it more and realised that group was very far in the direction of "friends with a common interest hang out", and that for student groups that were less like that I'm still basically pro CRMs. This feels obviously true for "advocacy" groups (anything explicitly religious or political, but also e.g. environmentalist groups, sustainability groups, help-your-local-community groups, anything do-goody). But I think I'd be in favour of even relatively neutral groups (e.g. student science club, student orchestras, etc) doing this.

Given how hard it is to keep any student group alive across multiple generations of leadership, not having a CRM is starting to seem very foolhardy to me.

ethai
I do community building with a (non-student, non-religious, non-EA) group that talks a lot about pretty sensitive topics, and we explicitly ask for permission to record things in the CRM. We don't ask "can we put you in our database?"; we phrase it as "hey, I'd love to connect you with XYZ folks in the chapter who have ABC in common with you, would you mind if I take some notes on what we talked about today, so I can share with them later?" But we take pretty seriously the importance of consent and privacy in the work that we're doing. Also, as someone who was in charge of recruitment at a sorority in college where ~half the student body was Greek-affiliated... yeah, community builders should have CRMs. We just don't call them CRMs; we call them "Potential New Member Sheet" or something.  It does feel a bit slimy, but I think this is pretty normal, and if done well, not likely to put off the folks we're worried about.

I get the impression many orgs set up to support EA groups have some version of this. Here are some I found on the internet:

Global Challenges Project has a "ready-to-go EA intro talk transcript, which you can use to run your own intro talk" here: https://handbook.globalchallengesproject.org/packaged-programs/intro-talks

EA Groups has "slides and a suggested script for an EA talk" here: https://resources.eagroups.org/events-program-ideas/single-day-events/introductory-presentations

To be fair, in both cases there is also some encouragement to adapt the talks, although I am not persuaded that this will actually happen much; and even when it does, it might still be obvious that you're seeing a variant on a prepared script.

I see, I thought you were referring to reading a script about EA during a one-on-one conversation. I don't see anything wrong with presenting a standardized talk, especially if you make it clear that EA is a global movement and not just a thing at your university. I would not be surprised if a local chapter of, say, Citizens' Climate Lobby, used an introductory talk created by the national organization rather than the local chapter.

I also misunderstood the original post as more like a "sales script" and less about talks. I also am surprised that people find having scripts for intro talks to be creepy, but perhaps secular Western society is just extremely oversensitive here (which is a preference we should respect if it's our target audience!)

Gavin
It's not just talks (as in presentations), it's also small-group discussions. 

My intuitive understanding of the Alice personality type (independent, skeptical, etc.) is that they are often very entrepreneurial (a skill EA desperately needs), but not usually "joiners". I have no doubt that a lot could be improved about community building, but there may always be some tension there that is difficult to resolve. 

It may be that the best we can hope for in a lot of those cases are people who understand EA ideas and use them to inform their work, but don't consider themselves EAs. That seems fine to me. Like person 1 in your real life example seems like a big win, even if they don't consider themselves EA. If the EA intro talk she attended helped get her on that track, then it "worked" for her in some sense. 

I'm definitely going to change my attitude to community building, to the extent I am involved with it, as a result of reading this. Making sure that criticisms are addressed to the satisfaction of the critic seems hugely important and I don't think I had grasped that before.

Thanks for posting this - it was an interesting and thoughtful read for me as a community builder. 

This summarised some thoughts I've had on this topic previously, and the implications on a large scale are concerning at the very least. In my experience, EA's growth over the past couple of years has meant bringing on a lot of people with specific technical expertise (or people who are seeking to gain this expertise), such as those working on AI safety/biorisk/etc, with a skillset that would broadly include mathematics, statistics, logical reasoning, and some level of technical expertise/knowledge of their field. Often (speaking anecdotally here) these would be the type of people who:

  1. are really good at working on detailed problems with defined parameters (eg. software developers)
  2. are very open to hearing things that challenge or further their existing knowledge, and will seek these things out
  3. will be easily persuaded by good arguments (and probably unlikely to push back if they find the arguments mostly convincing)

These people are pretty easy for community builders to deal with because there is a clear, forged pathway defined in EA for these people. Community builders can say, “Go d... (read more)

"If it’s also sufficiently likely that some people could figure this out and put us on a better path, then it seems really bad that we might be putting off those very people."

Here! When I was twelve, I spent four years finding the best way to benefit others, then I developed my skill-set to pursue a career in it... 26 years ago. So, I might qualify as one of those motivated altruists who is turned-off by the response they've gotten from EA. I think I'm one of the people you want to listen to carefully:

I don't need funding - I already devote 100% of my time as I choose, and I'm glad to give it all to each cause. I am looking to have the 1-to-2 hour long, 2-to-5 person thoughtful conversation, on literally dozens of existing and EA-adjacent topics. I am not looking for a 30min. networking/elevator-pitch at a conference, because I'm not trying to get hired as a PA. I am not looking for the meandering, noisy, distracted banter at a brief social event. This forum, unfortunately, has presented me with consistent misrepresentations and fallacies, which the commentators refuse to address when I point them out. Slack is similarly incapable of the deeper, thoughtful conversations, with membe... (read more)

"There are numerous ideas, opportunities, methods, that are going un-noticed because of the barriers placed in front of thoughtful dialogue. It is a burden that should rest upon those EAs who are dismissive of deeper conversation, instead of being the "price I have to pay, to prove myself, before anyone will listen", as I was most recently told on this Forum."
 

Your last paragraph is exactly what I'm worried about when considering engaging EA, and exactly why I bring up "signalling" and "posturing" in my own post. I worry about the maturity of the community, and about how serious EA is about actually getting things done as opposed to being self-congratulatory about its enlightened approach. I think most seasoned professionals don't have the patience for this kind of dynamic. However, I've yet to determine for myself the extent to which this dynamic actually exists in the community.

Anthony Repetto
"Remember that the marginal value of another HEA is way lower than the marginal value of an actual legitimate criticism of EA nobody else has considered yet." Thank you for saying it!
Jeremy
I sympathize with this, as it does seem like there aren't currently a ton of opportunities like this. That said, this is a pretty strong statement that would benefit from some examples to support it - though maybe that is beside the point, as the forum probably isn't going to be the "1-to-2 hour long, 2-to-5 person thoughtful conversation" you are looking for anyway.

Thank you for writing this post! I recently had a discussion with some EA intro fellowship participants who felt that EA is very demanding, with expectations about changing your career etc., and that it gives a very cultish or religious impression. Some said they are interested in EA and in applying some of its tools and mindsets in their lives, but that's it. I think we should embrace that too.

Thanks so much for this extremely important and well-written post, Theo! I really appreciate it.

My main takeaway from this post (among many takeaways!) is that EA outreach and movement-building could be significantly better. I'm not sure yet what the clear next steps are, but perhaps outreach could be even more individualized and epistemically humble.

One devil’s-advocate point on your point that “while it may be true that there are certain characteristics which predict that people are more likely to become HEAs, it does not follow that a larger EA community made... (read more)

Can confirm that other groups/subcultures have begun to see EA as a deceitful cult because of stuff like this

I've seen people make these complaints about EA since it first came to exist. 

As EA becomes bigger and better-known, I expect to see a higher volume of complaints even if the average person's impression remains the same/gets a bit better (though I'm not confident that's the case either).

This includes groups with no prior EA contact learning about it and deciding they don't like it — but I think they'd have had the same reaction at any point in EA's history.

Are there notable people or groups whose liking/trust of EA has, in your view, gone down over time?

Which stuff in particular?

Ondřej Kubů
More detail, please.

Assume that people find you more authoritative, important, and hard-to-criticise than you think you are. It’s usually not enough to be open to criticism - you have to actually seek it out or visibly reward it in front of other potential critics.

Chapter 7 in this book had a number of good insights on encouraging dissent from subordinates, in the context of disaster prevention.

Great post. I appreciate the framing around the real gaps in human capital. One additional concern I have is that aesthetics might play a counterproductive role in community building. For example, if EA aesthetics are most welcoming to people who are argumentative, perhaps even disagreeable, then the skill set of "one-on-one social skills and emotional intelligence" could be selected out (relatively speaking). 

As a community builder, I've lately been thinking about how much you can or should push and support other volunteers and new participants to engage more with EA. That could mean offering 1-1 calls, sending private and group messages about specific opportunities, and asking for help in organizing events, among other things. For context, this is mostly a reflection on what I think we (the other organizers and I) should maybe do in EA Finland.

Arguments for more pushing: 
I obviously believe what we're doing as a community is important and want more people to engage more in... (read more)

Some of these problems were discussed in part 4 of the Hear This Idea podcast episode with Anders Sandberg. As far as I remember, he claimed that the growth of EA may slow down because the utilitarian framework may put off people with different ethical foundations.

https://hearthisidea.com/episodes/anders

Hi! I personally am interested in EA from the standpoint of government policy as well as social and emotional skills. If anyone has any suggestions on how I can get more involved let me know.

[anonymous]

The groups I floated in were a mix of EAs and non-EAs, but eventually it rubbed off on me. And I’m pretty sure that if I hadn’t encountered EA in university it would have rubbed off a lot sooner.


What does "it rubbed off on me" mean here? I'm puzzling over this passage, and I keep thinking of the common usage in which "an idea rubs off on one" means that one adopts that idea. Do you use "it rubbed off on me" to mean that you lost agreement with "it"? What is "it"?

They are surprised that somebody interested in EA might be unhappy to discover that the committee members have been recording the details of their conversation in a CRM without asking.

Side note: morality aside, in Europe this is borderline illegal, so seems like a very bad idea.

Ben_West🔸
Can you clarify why you think it's "borderline illegal"? I assume you are referring to GDPR, but I'm not aware of any reason why the normal "legitimate interest" legal basis wouldn't apply to group organizers.
Arepo
Maybe I'm just wrong. I only have a lay understanding of GDPR, but my impression was that keeping records of what people had shared with you, without their knowledge, was getting into sketchy territory.

The increasing focus on Longtermism and X-risk has made us look cultish and unrelatable.

It was much harder for people to criticise EA as cultish when we were mainly about keeping poor people from starving or dying of preventable disease, because everyone can see immediately that those are worthy goals. X-risk and Longtermism don't make the same intuitive sense to people, so people dismiss the movement as weird and wrong.

We should lean back towards focusing on global development

I agree with paragraphs 1 and 2 and disagree with paragraph 3 :)

That is: I agree longtermism and x-risk are much more difficult to introduce to the general population. They're substantially farther from the status quo and have weirder and more counterintuitive implications.

However, we don't choose what to talk about by how palatable it is. We must be guided by what's true, and what's most important. Unfortunately, we live in a world where what's palatable and what's true need not align.

To be clear, if you think global development is more important than x-risk, it makes sense to suggest that we should focus that way instead. But if you think x-risk is more important, the fact that global development is less "weird" is not enough reason to lean back that way.

David Mathers🔸
I suspect that it varies within the domain of x-risk-focused work how weird and cultish it looks to the average person. I think both A.I. risk stuff and a generic "reduce extinction risk" framing will look more "religious" to the average person than "we are worried about pandemics and nuclear wars."