The goal of this short-form post: to outline what I see as the key common ground between the “big tent” versus “small and weird” discussions that have been happening recently, and one candidate point of disagreement.
Tl;dr:
Common ground:
Everyone really values good thinking processes/epistemics/reasoning transparency and wants to make sure we maintain that aspect of the existing effective altruism community
Impact is fat-tailed
We might be getting a lot more attention soon because of our increased spending and because of the August release of "What We Owe the Future" (and the marketing push that is likely to accompany its release)[1]
A key point of disagreement: Does focusing on finding the people who produce the “tail” impact actually result in more impact?
One reason this wouldn’t be the case: “median” community building efforts and “tail” community building efforts are complements not substitutes. They are multipliers[2] of each other, rather than being additive and independent.
The additive hypothesis is simpler, so I felt the multiplicative hypothesis needed some mechanisms spelled out. Possible mechanisms:
Mechanism 1: the sorts of community building efforts that are more “median” friendly actually help the people who eventually create the “tail” impact become more interested in these ideas and more interested in taking bigger action with time
Mechanism 2: our biggest lever for impact in the future will not be the highly dedicated individuals but our influence on people on the periphery of the effective altruism community (what I call “campground” effects)
Preamble (read: pre-ramble)
This is my summary of my vibe/impressions on some of the parts of the recent discussion that have stood out to me as particularly important. I am intending to finish my half a dozen drafts of a top-level post (with much more explanation of my random jargon that isn’t always even that common in effective altruism circles) at some point, but I thought I’d start by sharing these rough thoughts to help get me over the “sharing things on the EA forum is scary” hump.
I might end up just sharing this post as a top-level post later once I’ve translated my random jargon a bit more and thought a bit more about the claims here I’m least sure of (possibly with a clearer outline of what cruxes make the “multiplicative effects” mechanisms more or less compelling)
Some common ground
These are some of my impressions of some claims that seem to be pretty common across the board (but that people sometimes talk about as though they suspect the person they are talking to might not agree, so I think it’s worth making them explicit somewhere).
The biggest one seems to be: We like the fact that effective altruism has good thinking processes/epistemics a lot! We don’t want to jeopardize our reasoning transparency and scout mindsets for the sake of going viral.
Impact is fat-tailed and this makes community-building challenging: there are a lot of uncomfortable trade-offs that might need to be made if we want to build the effective altruism community into a community that will be able to do as much good as possible.
We might be getting a lot more attention very soon whether we want to or not because we're spending more (and spending in places that get a lot of media attention like political races) and because there will be a big marketing push for "What We Owe the Future" to, potentially, a very big audience. [3]
A point of disagreement
It seems like there are a few points of disagreement that I intended to go into, but this one got pretty long so I’ll just leave this as one point:
Does focusing on “tail” people actually result in more impact?
Are “tail” work and “median” work complements or substitutes? Are they additive (so specialization in the bit with all the impact makes sense) or multiplicative (so doing both well is a necessary condition to getting “tails”)?
I feel like the “additive/substitutes” hypothesis is more intuitive/a simpler assumption so I’ve outlined some explicit mechanisms for the “multiplicative/complements” hypothesis.
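To make the additive-vs-multiplicative distinction concrete, here is a tiny toy sketch (all numbers are made up, and of course “tail” and “median” effort aren’t really a single dial you can turn): if the two kinds of community building add, you maximise impact by specialising entirely in whichever one has the higher return; if they multiply, letting either one go to zero collapses the whole product, so a more balanced split wins.

```python
# Toy model: split a fixed community-building budget between "tail" and
# "median" efforts and compare an additive vs a multiplicative world.
# All numbers are made up purely for illustration.

def additive_impact(tail_effort, median_effort):
    # Independent contributions: the best move is to specialise entirely
    # in whichever term has the higher per-unit return (here, tail effort).
    return 10 * tail_effort + 1 * median_effort

def multiplicative_impact(tail_effort, median_effort):
    # Complements: impact collapses if either factor goes to zero,
    # so a more balanced allocation does better.
    return 10 * tail_effort * median_effort

budget = 1.0
for tail_share in [0.0, 0.25, 0.5, 0.75, 1.0]:
    tail, median = tail_share * budget, (1 - tail_share) * budget
    print(f"tail share {tail_share:.2f}: "
          f"additive {additive_impact(tail, median):5.2f}, "
          f"multiplicative {multiplicative_impact(tail, median):5.2f}")

# Additive world: impact is maximised by putting everything into "tail" effort.
# Multiplicative world: impact peaks at the 50/50 split and is zero at either extreme.
```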
Mechanisms for the “multiplicative/complements” hypothesis
Mechanism 1
“Tail” people often require similar “soft” entry points to “non-tail” people, and focusing on the “median” people on some dimensions is actually better at getting the “tail” people because we just model “tail” people wrong (e.g. it can look like some people were always going to be tails, but in reality, when we deep-dive into individual “tail” stories, there were, accidentally, “soft” entry points).
The dimensions where people advocate for lowering the bar are not epistemics/thinking processes, but things like:
language barriers (e.g. reducing jargon, finding a plain English way to say something or doing your best to define the jargon when you use it if you think it’s so useful that it’s worth a definition),
making it easier for people to transition at their own pace from wherever they are to “extreme dedication” (and being very okay with some people stopping completely way before), and
reducing the social pressure to agree with the current set of conclusions by putting a lot more emphasis on a broader spectrum of plausible candidates that we might focus on if we’re trying to help others as much as possible (where “plausible candidates” are answers to the question of how we can help others the most with impartiality, considering all people alive today [4] or even larger moral circles/circle of compassion than that too, where an example of a larger group of individuals we might be wanting to help is all present and future sentient beings)
Mechanism 2
As we get more exposure, our biggest lever for impact might not be the people who get really enthusiastic about effective altruism and go all the way to the last stop of the crazy train (what I might call our current tent), but the cultural memes we’re spreading to friends-of-friends-of-friends of people who have interacted with people in the effective altruism community or with the ideas and have strong views about them (positive or negative), which I have been calling “our campground” in all my essays to myself on this topic 🤣.
E.g. let’s say that the only thing that matters for humanity’s survival is who ends up in a very small number of very pivotal rooms,[5] it might be much easier to influence a lot of the people who are likely to be in those rooms a little bit to be thinking about some of the key considerations we hope they’d be considering (it’d be nice if we made it more likely that a lot of people might have the thought; “a lot might be at stake here, let’s take a breather before we do X”) than to get people who have dedicated their lives to reducing X-risk because effective altruism-style thinking and caring is a core part of who they are in those rooms.
As we get more exposure, it definitely seems true that “campground” effects are going to get bigger whether we like it or not.[6]
It is an open question (in my mind at least) whether we can leverage this to have a lot more impact or whether the best we can do is sit tight and try and keep the small core community on point.
As a little aside, I am so excited to get my hands on a copy (suddenly August doesn't seem so soon)!
Additive and multiplicative models aren't the only two plausible "approximations" of what might be going on, but they are a nice starting point. It doesn't seem outside the range of possibility that there are big positive feedback loops between "core camp efforts" and "campground efforts" (and all the efforts in between). If this is plausibly true, then the "tails" for the impact of the effective altruism community as a whole could be here.
this point of common ground was edited in after first posting this comment
This is a pretty arbitrary cutoff of what counts as a large enough moral circle to count under the broader idea behind effective altruism and trying to do the most good, but I like being explicit about what we might mean because otherwise people get confused/it’s harder to identify what is a disagreement about the facts and what is just a lack of clarity in the questions we’re trying to ask.
I like this arbitrary cutoff a lot because
1) anyone who cares about every single person alive today already has a ginormous moral circle and I think that’s incredible: this seems to be very much wide enough to get at the vibe of widely caring about others, and
2) the crazy train goes pretty far and it is not at all obvious to me where the “right” stopping point is; I’ve got off a few stops along the way (my shortcut to avoid dealing with some crazier questions down the line, like infinite ethics, is “just” considering those in my light cone, whether it be simulated or not 😅, not because I actually think this is all that reasonable, but because more thought on what “the answer” is seems to get in the way of me thinking hard about doing the things I think are pretty good, which, on expectation, actually does more for what I’d guess I’ll care about if I had all of time to think about it).
this example is total plagiarism, see: https://80000hours.org/podcast/episodes/sam-bankman-fried-high-risk-approach-to-crypto-and-doing-good/ (also has such a great discussion on multiplicative type effects being a big deal sometimes which I feel people in the effective altruism community think about less than we should: more specialization and more narrowing the focus isn't always the best strategy on the margin for maximizing how good things are and will be on expectation, especially as we grow and have more variation in people's comparative advantages within our community, and more specifically, within our set of community builders)
If our brand/reputation has lock-in for a really long time, this could plausibly be a hinge of history moment for the effective altruism community. If there are ways of making our branding/reputation as high fidelity as possible within the low-fidelity channels that messages travel virally, this could be a huge deal (ideally, once we have some goodwill from the broader "campground", we will have a bit of a long reflection to work out what we want our tent to look like 🤣😝).
Thanks Sophia. I think that you’ve quite articulately identified and laid out the difference between these two schools of thought/intuitions about community.
I’d like to see this developed further into a general forum post as I think it contributes to the conversation. FWIW my current take is that we’re more in a multiplicative world (for both the reasons you lay out) and that the lower cost solutions (like the ones you laid out) seem to be table stakes (and I’d even go further and say that if push came to shove I’d actively trade off towards focusing more on the median for these reasons).
Thanks Luke 🌞
Yeah, I definitely think there are some multiplicative effects.
Now I'm teasing out what I think in more detail, I'm starting to find the "median" and "tails" distinction, while useful, still maybe a bit too rough for me to decide whether we should do more or less of any particular strategy that is targeted at either group (which makes me hesitant to immediately put these thoughts up as a top-level post until I've teased out what my best guesses are on how we should maybe change our behaviour if we think we live in a "multiplicative" world).[1]
Here are some more of the considerations/claims (that I'm not all that confident in) that are swirling around in my head at the moment 😊.
tl;dr:
High fidelity communication is really challenging (and doubly so in broad outreach efforts).
However, broad outreach might thicken the positive tail of the effective altruism movement's impact distribution and thin the negative one even if the median outcome might result in a "diluted" effective altruism community.
Since we are trying to maximize the effective altruism community's expected impact, and all the impact is likely to be at the tails, we actually probably shouldn't care all that much about the median outcome anyway.
High fidelity communication about effective altruism is challenging (and even more difficult when we do broader outreach/try to be welcoming to a wider range of people)
I do think it is a huge challenge to preserve the effective altruism community's dedication to:
caring about, at least, everyone alive today; and
transparent reasoning, a scout mindset and more generally putting a tonne of effort into finding out what is true even if it is really inconvenient.
I do think really narrow targeting might be one of the best tools we have to maintain those things.
Some reasons why we might want to de-emphasize filtering in existing local groups:
First reason
One reason focusing on this logic can sometimes be counter-productive is that some filtering seems to just miss the mark (see my comment here for an example of how some filtering could plausibly be systematically selecting against traits we value).
Introducing a second reason (more fully fleshed out in the remainder of this comment)
However, the main reason I think that trying to leverage our media attention, trying to do broad outreach well and trying to be really welcoming at all our shop-fronts might be important to prioritise (even if it might sometimes mean community builders will have to spend less time focusing on the people who seem most promising) is not because of the median outcome from this strategy.
Trying to nail campground effects is really, really, really hard while simultaneously trying to keep effective altruism about effective altruism. However, we're not trying to optimize for the median outcome for the effective altruism community, we're trying to maximize the effective altruism community's expected impact. This is why, despite the fact that "dilution" effects seem like a huge risk, we probably should just aim for the positive tail scenario because that is where our biggest positive impact might be anyway (and also aim to minimize the risks of negative tail scenarios because that also is going to be a big factor in our overall expected impact).
"Median" outreach work might be important to increase our chances of a positive "tail" impact of the effective altruism community as a whole
It is okay if, in most worlds, the effective altruism community has very little impact in the end.
We're not actually trying to guarantee some level of impact in every possible world. We're trying to maximize the effective altruism movement's expected impact.
We're not aiming for a "median" effective altruism community, we're trying to maximize our expected impact (so it's okay if we risk having no impact if that is what we need to do to make positive tails possible or reduce the risk of extreme negative tail outcomes of our work)
Increasing the chances of the positive tails of the effective altruism movement
I think the positive tail impacts are in the worlds where we've mastered the synergies between "tent" strategies and "campground" strategies: where we find ways of keeping the "tent" on point while still making use of our power to spread ideas to a very large number of people (even if the ideas we spread to a much larger audience are obviously going to be lower fidelity, we can still put a tonne of effort into working out which lower-fidelity ideas are the best ones to spread to make the long-term future go really well).
Avoiding the negative tail impacts of the effective altruism movement
This second thought makes me very sad, but I think it is worth saying. I'm not confident in any of this because I don't like thinking about it much (it is not fun). Therefore, these thoughts are probably a lot less developed than my "happier", more optimistic thoughts about the effective altruism community.
I have a strong intuition that more campground strategies reduce the risk of negative tail impacts of the effective altruism movement (though I wish I didn't have this intuition and I hope someone is able to convince me that this gut feeling is unfounded because I love the effective altruism movement).
Even if campground strategies make it more likely that the effective altruism movement has no impact, it seems completely plausible to me that that might still be a good thing.
A small and weird "cabal" effective altruism, with a lot of power and a lot of money, makes people feel uncomfortable for good reason. There are selection effects, but history is lined with small groups of powerful people who genuinely believed they were making the world a better place and seem, in retrospect, to have done a lot more harm than good.
More people understanding what we're saying and why makes it more likely that smart people outside our echo chamber can push back when we're wrong. It's a nice safety harness to prevent very bad outcomes.
It is also plausible to me that a "tent"-focused effective altruism movement might be more likely to achieve both its 95th-percentile-and-above very positive impact and its 5th-percentile-and-below very negative impact.
Effective altruism feels like a rocket right now and rockets aren't very stable. It intuitively feels easy to have a very big impact, when you do big, ambitious things in an unstable way, and not be able to easily control the sign of that big impact: there is a chance it is very positive or very negative.
I find it plausible that, if you're going to have a huge impact on the world, having a big negative impact is easier than having a big positive impact by a wide margin (doing good is just darn hard and there are no slam dunk answers[2]).[3] Even though we're thinking hard about how to make it good, I think it might just be really easy to make it bad (e.g. by bringing attention to the alignment problem, we might be increasing excitement and interest in the plausibility of AGI and therefore getting to AGI faster than if no-one talked about alignment).
I might post a high level post before I have finished teasing out my best guesses on what the implications might be, because I find my views change so fast that it is really hard to ever finish writing down what I think, and it is possibly still better for me to share some of my thoughts more publicly than to share none of them. I often feel like I'm bouncing around like a yo-yo and I'm hoping at some point my thoughts will settle down somewhere on an "equilibrium" view instead of continuously thinking up considerations that cause me to completely flip my opinion (and leave me saying inconsistent things left, right and center because I just don't know what I think quite yet 😝🤣😅). I have made a commitment bet with a friend to post something as a top-level post within two weeks, so I will have to either give a snapshot view then, or settle on a view, or lose $$ (the only reason I finished the top-level short-form comment that started this discussion was because of a different bet with a different friend 🙃😶🤷🏼♀️). At the very least, I hope that I can come up with a more wholesome (but still absolutely true) framing of a lot of the considerations I've outlined in the remainder of this post as I think it over more.
I think it was Ben Garfinkel who said the "no slam dunk answers" thing in a post on suspicious convergence when it comes to arguments about AI risk, but I'm too lazy to chase it up to link it (edit: I did go try and chase up the link to this, I think my memory had maybe merged/mixed together this post by Gregory Lewis on suspicious convergence and this transcript from a talk by Ben Garfinkel, I'm leaving both links in this footnote because Gregory Lewis' post is so good that I'll use any excuse to leave a link to it wherever I can even though it wasn't actually relevant to the "no slam dunk answers" quote; maybe I just took my reading of HPMOR too literally :P)
I agree that we need scope for people to gradually increase their commitment over time. Actually, that's how my journey has kind of worked out.
On the other hand, I suspect that tail people can build a bigger and more impactful campfire. For example, one Matt Yglesias occasionally posting positive things about EA or EA adjacent ideas increases our campfire by a lot and these people are more likely to be the ones who can influence things.
Yeah, but what people experience when they hear about EA via someone like Matt will determine their further actions/beliefs about EA. If they show up and unnecessarily feel unwelcome or misunderstand EA then we’ve not just missed an opportunity then and there but potentially soured them for the long term (and what they say to others will sour others before we get a chance to reach them).
Hey Chris 😊, yeah, I think changing your mind and life in big ways overnight is a very big ask (and it's nice to feel like you're welcome to think about what might be true before you decide whether to commit to doing anything about it -- it helps a lot with the cognitive dissonance we all feel when our actions, the values we claim to hold ourselves to and what we believe about the world are at odds[1]).
I also completely agree with some targeting being very valuable. I think we should target exceptionally caring people who have exceptional track-records of being able to accomplish the stuff they set out to accomplish/the stuff they believe is valuable/worthwhile. I also think that if you spend a tonne of time with someone who clearly isn't getting it even though they have an impressive track record in some domain, then it makes complete sense to use your marginal community building time elsewhere.
However, my guess is that sometimes we can filter too hard, too early for us to get the tail-end of the effective altruism community's impact.
It is easy for a person to form an accurate impression of another person who is similar to them. It is much harder for a person to quickly form an accurate impression of another person who is really different (but because of diminishing returns, it seems way more valuable on the margin to get people who are exceptional in a different way to the way that the existing community tends to be exceptional than another person who thinks the same way and has the same skills).
and we want to make it easier for people to align these three things in a direction that leads to more caring about others and more seeing the world the way it is (we don't want to push people away from identifying as someone who cares about others, or towards shying away from thinking about how the world really is). If we push too hard on all three things at once, I think it is much easier for people to align these three things by either deciding they actually don't value what they thought they valued, deciding they actually don't really care about others, or finding it incredibly hard to see the world exactly as it is (because otherwise their values and their actions will have this huge gap)
EDIT: Witness this train-wreck of me figuring out what I maybe think in real time half-coherently below as I go :P[1]
yeah, I guess an intuition that I have is there are some decisions where we can gain a lot of ground by focusing our efforts in places where it is more likely we come across people who are able to create tail impacts over their lifetimes (e.g. by prioritising creating effective altruism groups in places with lots of people who have a pre-existing track record of being able to achieve the things they set out to achieve). However, I feel like there are some places where more marginal effort on targeting the people who could become tails has sharply diminishing returns and comes with some costs that might not actually be worth it. For example, once you have set up a group in a place full of people who have track records of achieving the things they set their minds to, to a really exceptional degree, trying to figure out how much "tail potential" someone has from there can often leave people, who might have been tail potential if they had been guided in a helpful way, completely put off from engaging with us at all.
This entire thread is not actually recommended reading but I'm keeping it here because I haven't yet decided whether I endorse it or not and I don't see that much disutility in leaving it here in the meantime while I think about this more.
I'm also not sure, once we're already targeting people who have track records of doing the things they've put their minds to (which obviously won't be a perfect proxy for tail potential but it often seems better than no prioritisation of where the marginal group should go), how good we are at assessing someone's "tail potential", especially because there are going to be big marginal returns to finding people who have a different comparative advantage to the existing community (if it is possible to communicate the key ideas/thinking with high fidelity) but who will have more of an inferential gap to cross before communication is efficient enough for us to be able to tell how smart they are/how much potential they have.
This impression comes from knowing people where I speak their language (metaphorically) and I also speak EA (so I can absorb a lot of EA content and translate it in a way they can understand), who are pretty great at reasoning transparency and updating in conversations with people with whom they've got pre-established trust (which means that, when miscommunications inevitably happen, the base assumption is still that I'm arguing in good faith). They can't really demonstrate that reasoning transparency if the person they are talking to doesn't understand their use of language/their worldview well enough to see that it is actually pretty precise and clear and transparent once you understand what they mean by the words they use.
(I mainly have this experience with people who maybe didn't study maths or economics or something that STEM-y, but with whom I share other "languages" that mean I can still cross inferential gaps reasonably efficiently)
This is a proof of existence of these kinds of people. It doesn't really tell us all that much about what proportion of people without the backgrounds that make the EA language barrier a lot smaller (like philosophy, econ and STEM) are actually good at the thinking processes we value very highly that are taught a lot in STEM subjects.
I could have had this experience with people who I know and this could still not mean that "treating people with a huge amount of charity because some people might have the potential to have a tail impact, even if we'd not guess it when we first meet them" is actually worth it overall. I've got a biased sample, but I don't think it's irrational that this informs my inside view even if I am aware that my sample is likely to be heavily biased (I am only going to have built a common language with people/built trust with people if there is something that fuels our friendships -- the people who I want to be friends with are not random! They are people who make me feel understood or say things that I find thought-provoking, or a number of other factors that make them a naturally very cherry-picked pool of people).
Basically, my current best guess is that being really open-minded and patient with people once your group is at a place where pretty much everyone has demonstrated they are a tail person in one way or another (whether that's because of their personal traits or because of their fortunate circumstances) will get us more people who have the potential to have a positive tail-end impact engaging with us enough for that potential to have a great shot of being realised.
EDIT: I copied and pasted this comment as a direct reply to Chris and then edited it to make it make more sense than it did the first time I wrote it and also to make it way nicer than my off-the-cuff/figuring-out-what-thought-as-I-went stream-of-consciousness but I left this here anyway partly for context for the later comments and also because I think it's kind of fun to have a record (even if just for me) of how my thoughts develop as I write/tease out what sounds plausibly true once I've written it and what doesn't quite seem to hit the mark of what intuition I'm attempting to articulate (or what intuition that, once I find a way to articulate it, ends up seeming obviously false once I've written it up).
I am not arguing that we should not target exceptional people, I think exceptionally smart and caring people are way better to spend a lot of one-on-one time with than people who care an average amount about helping others and for whom there is a lot of evidence that they haven't yet got a track record of being able to accomplish things they set their minds to.
My guess is that sometimes we can filter too hard, too early for us to get the tail-end of the effective altruism community's impact.
It is easy for a person to form an accurate impression of another person who is similar to them. It is much harder for a person to quickly form an accurate impression of another person who is really different (but because of diminishing returns, it seems way more valuable on the margin to get people who are exceptional in a different way to the way that the existing community tends to be exceptional than another person who thinks the same way and has the same skills).
(I am not confident I will reflectively endorse much of the above 24 hours from now, I'm just sharing my off-the-cuff vibes which might solidify into more or less confidence when I let these thoughts sit for a bit more time)
If my confidence in any of these claims substantially increases or decreases in the next few days I might come back and clarify that (but if doing this becomes a bit of an ugh field, I'm not going to prioritise de-ughing it because there are other ugh-fields that are higher on my list to prioritise de-ughing 😝)
I think there's a lot of value in people reaching out to people they know (this seems undervalued in EA, then again maybe it's intentional as evangelism can turn people off). This doesn't seem to trade-off too substantially against more formal movement-building methods which should probably filter more on which groups are going to be most impactful.
In terms of expanding the range of people and skills in EA, that seems to be happening over time (take for example the EA blog prize: https://effectiveideas.org/ ). Or the increased focus on PA's (https://pineappleoperations.org/). I have no doubt that there are still many useful skills that we're missing, but there's a decent chance that funding would be available if there was a decent team to work on the project.
I suspect that some ways we filter at events of existing groups are good and we should keep doing them.
I also suspect some strategies/tendencies we have when we filter at the group level are counter-productive to finding and keeping high-potential people.
For example, filtering too fast based on how quickly someone seems to "get" longtermism might filter in the people who are more willing to defer and so seem like they get it more than they do.
It might filter out the people who are really trying to think it through, who seem more resistant to the ideas or who are more willing to voice their half-formed thoughts that haven't yet developed into something that deep (because thinking through all the different considerations to form an inside view takes a lot of time and involves voicing a lot of "dead-end" thoughts). Those higher value people might systematically be classed as "less tractable" or "less smart" when, in fact, it is sometimes[1] just that we have forgotten that people who are really thinking about these ideas seriously, who are smart enough to possibly be a person who could have a tail-end impact, are going to say things that don't sound smart as they navigate what they think. The further someone is from our echo chamber, the stronger I expect this effect to be.
Obviously I don't know how most groups filter at the group-level, this is so dependent on the particular community organizers (and then also there are maybe some cultural commonalities across the movement which is why I find it tempting to make broad-sweeping generalisations that might not hold in many places).
but obviously not always (and I don't actually have a clear idea of how big a deal this issue is, I'm just trying to untangle my various intuitions so I can more easily scrutinize if there is a grain of truth in any of them on closer inspection)
Hmm... Some really interesting thoughts. I generally try to determine whether people are actually making considered counter-arguments vs. repeating cliches, but I take your point that a willingness to voice half-formed thoughts can cause others to assume you're stupid.
I guess in terms of outreach it makes sense to cultivate a sense of practical wisdom so that you can determine when to patiently continue a conversation or when to politely and strategically withdraw so as to save energy and avoid wasting time. This won't be perfect and it's subject to biases as you mentioned, but it's really the best option available.
Hmm, I'm not sure I agree with the claim "it's really the best option available" even if I don't already have a better solution pre-thought up. Or at the very least, I think that how to foster this culture might be worth a lot of strategic thought.
Even if there is a decent chance we end up concluding there isn't all that much we can do, I think the payoff to finding a good way to manage this might be big enough to make up for all the possible worlds where this work ends up being a dead-end.
Sorry Sophia but I still don't completely understand how what we're talking about maps on to actual decisions community builders are making. I'm still suspicious that many of us are sensing a vibe problem but misdiagnosing it as a messaging/cause prioritization problem.
I would find it really helpful if you could give an example of how you perceive the big tent / small tent abstraction could map on to a concrete action which a community organiser takes.
Let's say someone was starting up a new EA group at an Australian university - what's an example of a mistake you worry this person might make if they're too directly focused on chasing the "tails"?
Relevant context: Xavier used to do lots of community building and helped run an Australian university EA group
Yeah, I think vibes are a big deal, where vibes is pointing to something like "people have fun at events because the social dynamics feel good" (where "fun from social dynamics" is distinct from "fun from the intellectual stimulation of the philosophical puzzles" or other sources of fun).
Maybe this doesn't add anything to that; everything else that I would say I'm pointing to with the whole "median" targeting thing maybe also impacts vibes?
Campground/tent model is maybe more useful for understanding the role of vibes
The campground/tent distinction helps me form my thoughts more explicitly around the role that vibes play (I do think they are not the only thing in a vague cluster of related strategies that I want to point to, but they are definitely a huge chunk/a lot of the other strategies maybe are important because of their impact on "vibes").
In short, our "tent" is the current movement and the "campground" is everyone who isn't in the movement (got a top-level post in the works, privately shared a draft with Xavier already but for anyone else who wants to give feedback before I post it, please DM me!).
Vibes at events seem important for the campground, ie. for all the people who aren't involved.
For example, a positive campground effect of vibes could go something like:
Someone comes to an event and afterwards has a great impression of effective altruism, but they don't end up engaging much more, maybe because they just happen to get busy or it's not really quite their thing intellectually. Then, years later, they might go on to say nice things to their colleagues or friends who mention effective altruism. Those colleagues and friends can then engage with these ideas they agree with without being socially punished.
The point of the above comments and also the campground/tent model is more about establishing language that allows us to talk about individual community building strategies. I think often in community building discussions, a lot of different things get easily conflated because we haven't really found good ways of disentangling the various effects people are pointing to (which ends up maybe leading to inferential gaps not being closed because people can easily talk past each other without the models/language being well-defined enough). I do sometimes talk about example strategies because my motivation for wanting to create these models/language was that I was failing to articulate my intuitions around various community building strategies (but I think there is a decent chance that once I've found a way to clearly express what I initially thought, it will be obvious to me that I was wrong, so for now, I'm not really trying to say "this is what an Australian group should do"; I'm more just trying to work out "if this model were correct, what should an Australian group do?").
I can totally imagine that after getting over the fact that it was your idea and not mine (so maybe give it a month or so), I might just end up deciding that the vibes model is superior to anything discussed here (though of course, current me's inside view is that that isn't the case or I'd just update straight away)!
According to current me. Who knows what the past me that wrote this stuff to begin with thought? I'll answer my own rhetorical question: you'd hope I would! My stab at a guess is that my views are becoming somewhat more coherent and also somewhat less confident with time as I think about this more (so you are currently dealing with moving goalposts, sorry!).
Basically, I'm not yet really trying to draw a map that I'm confident in, I'm more just trying to find a really good legend for many maps, in order to then draw out the current main hypothesis maps with, hopefully, more precision and less confusion.
Also, in my mind, vibe management comes under the same cluster of communication-related things (overlapping categories include: framing/language/presentation and PR/marketing etc) that people are generally pointing to in these conversations. You seem to just be saying that the diagnosis is too broad and that there is a subset of stuff that is the only problem and it all comes under "vibes", which I disagree with (eg. see this comment and this comment for examples of something else I am trying to point to that doesn't seem to be captured by "vibes" which I think are the sorts of examples posts like this one and this one[1] are trying to point at)
This comment is also, AFAICT, pointing to something true -- I think there is some narrow window in between that is maybe too high a standard but possibly worth aiming for (I'm yet to confidently decide), and we need some clear language to discuss it in such a way that everyone is on the same page and can more easily find all the various relevant cruxes (such language might not exist, and I'm not sure my language will end up being that useful - but if such language exists and isn't that hard to find, it could be useful for facilitating double-crux finding!)
The reputation of the effective altruism society on each campus seems incredibly important for the "effective altruism" brand among key audiences. E.g. future DeepMind team leaders could come out of MIT, Harvard, Stanford etc.
Are we doing everything we could to leave people with an honest but still good impression? (whether or not they seem interested in engaging further)
The goal of this short-form post: to outline what I see as the key common ground between the “big tent” versus “small and weird” discussions that have been happening recently and to outline one candidate point of disagreement.
Tl;dr:
Preamble (read: pre-ramble)
This is my summary of my vibe/impressions on some of the parts of the recent discussion that have stood out to me as particularly important. I am intending to finish my half a dozen drafts of a top-level post (with much more explanations for my random jargon that isn’t always even that common in effective altruism circles) at some point but I thought I’d start but sharing these rough thoughts to help get me over the “sharing things on the EA forum is scary” hump.
I might end up just sharing this post as a top-level post later once I’ve translated my random jargon a bit more and thought a bit more about the claims here I’m least sure of (possibly with a clearer outline of what cruxes make the “multiplicative effects” mechanisms more or less compelling)
Some common ground
These are some of my impressions of some claims that seem to be pretty common across the board (but that people sometimes talk as though they might suspect that the person they are talking to might not agree so I think it’s worth making them explicit somewhere).
A point of disagreement
It seems like there are a few points of disagreement that I intended to go into, but this one got pretty long so I’ll just leave this as one point:
Does focusing on “tail” people actually result in more impact?
Are “tail” work and “median” work complements or substitutes? Are they additive (so specialization in the bit with all the impact makes sense) or multiplicative (so doing both well is a necessary condition to getting “tails”)?
I feel like the “additive/substitutes” hypothesis is more intuitive/a simpler assumption so I’ve outlined some explicit mechanisms for the “multiplicative/complements” hypothesis.
Mechanisms for the “multiplicative/complements” hypothesis
Mechanism 1
“Tail” people often require similar “soft” entry points to “non-tail” people and focusing on the “median” people on some dimensions actually is better at getting the “tails” people because we just model “tail” people wrong (e.g. someone could think it looks like some people were always going to be tails, but in reality, when we deep-dive into individual “tail” stories, there was, accidentally, “soft” entry points).
The dimensions where people advocate for lowering the bar are not epistemics/thinking processes, but things like
Mechanism 2
As we get more exposure, our biggest lever for impact might not the people that get really enthusiastic about effective altruism who go all the way to the last stop of the crazy train (what I might call our current tent), but the cultural memes we’re spreading to friends-of-friends-of-friends of people who have interacted with people in the effective altruism community or with the ideas and have strong views about them (positive or negative), which I have been calling “our campground” in all my essays to myself on this topic 🤣.
E.g. let’s say that the only thing that matters for humanity’s survival is who ends up in a very small number of very pivotal rooms,[5] it might be much easier to influence a lot of the people who are likely to be in those rooms a little bit to be thinking about some of the key considerations we hope they’d be considering (it’d be nice if we made it more likely that a lot of people might have the thought; “a lot might be at stake here, let’s take a breather before we do X”) than to get people who have dedicated their lives to reducing X-risk because effective altruism-style thinking and caring is a core part of who they are in those rooms.
As we get more exposure, it definitely seems true that “campground” effects are going to get bigger whether we like it or not.[6]
It is an open question (in my mind at least) whether we can leverage this to have a lot more impact or whether the best we can do is sit tight and try and keep the small core community on point.
As a little aside, I am so excited to get my hands on a copy (suddenly August doesn't seem so soon)!
Additive and multiplicative models aren't the only two plausible "approximations" of what might be going on, but they are a nice starting point. It doesn't seem outside the range of possibility that there are big positive feedback loops between "core camp efforts" and "campground efforts" (and all the efforts in between). If this is plausibly true, then the "tails" for the impact of the effective altruism community as a whole could be here.
this point of common ground was edited in after first posting this comment
This is a pretty arbitrary cutoff of what counts as a large enough moral circle to count under the broader idea behind effective altruism and trying to do the most good, but I like being explicit about what we might mean because otherwise people get confused/it’s harder to identify what is a disagreement about the facts and what is just a lack of clarity in the questions we’re trying to ask.
I like this arbitrary cutoff a lot because
1) anyone who cares about every single person alive today already has a ginormous moral circle and I think that’s incredible: this seems to be very much wide enough to get at the vibe of the widely caring about others, and
2) the crazy train goes pretty far, it is not at all obvious to me where the “right” stopping point is, I’ve got off a few stops along (my shortcut to avoid dealing with some crazier questions down the line, like infinite ethics, is “just” considering those in my light cone where it be simulated or not😅, not because I actually think this is all that reasonable, but because more thought on what “the answer” is seems to get in the way of me thinking hard about doing the things I think are pretty good which I think, on expectation, actually does more for what I’d guess I’ll care about if I had all of time to think about it.
this example is total plagiarism, see: https://80000hours.org/podcast/episodes/sam-bankman-fried-high-risk-approach-to-crypto-and-doing-good/ (also has such a great discussion on multiplicative type effects being a big deal sometimes which I feel people in the effective altruism community think about less than we should: more specialization and more narrowing the focus isn't always the best strategy on the margin for maximizing how good things are and will be on expectation, especially as we grow and have more variation in people's comparative advantages within our community, and more specifically, within our set of community builders)
If our brand/reputation has lock-in for decades for a really long time, this could plausibly be a hinge of history moment for the effective altruism community. If there are ways of making our branding/reputation is as high fidelity as possible within the low-fidelity channels that messages travel virally, this could be a huge deal (ideally, once we have some goodwill from the broader "campground" we will have a bit of a long reflection to work out what we want our tent to look like 🤣😝).
Thanks Sophia. I think that you’ve quite articulately identified and laid out the difference between these two schools of thought/intuitions about community.
I’d like to see this developed further into a general forum post as I think it contributes to the conversation. FWIW my current take is that we’re more in a multiplicative world (for both the reasons you lay out) and that the lower cost solutions (like the ones you laid out) seem to be table stakes (and I’d even go further and say that if push came to shove I’d actively trade off towards focusing more on the median for these reasons).
Thanks Luke 🌞
Yeah, I definitely think there are some multiplicative effects.
Now I'm teasing out what I think in more detail, I'm starting to find the "median" and "tails" distinction, while useful, still maybe a bit too rough for me to decide whether we should do more or less of any particular strategy that is targeted at either group (which makes me hesitant to immediately put these thoughts as a top form post until I've teased out what my best guesses are on how we should maybe change our behaviour if we think we live in a "multiplicative" world).[1]
Here are some more of the considerations/claims (that I'm not all that confident in) that are swirling around in my head at the moment 😊.
tl;dr:
High fidelity communication about effective altruism is challenging (and even more difficult when we do broader outreach/try to be welcoming to a wider range of people)
I do think it is a huge challenge to preserve the effective altruism community's dedication to:
I do think really narrow targeting might be one of the best tools we have to maintain those things.
Some reasons why we might we want to de-emphasize filtering in existing local groups:
First reason
One reason why focusing on this logic can sometimes be counter-productive because some filtering seems to just miss the mark (see my comment here for an example of how some filtering could plausibly be systematically selecting against traits we value).
Introducing a second reason (more fully-fleshed out in the remainder of this comment)
However, the main reason I think that trying to leverage our media attention, trying to do broad outreach well and trying to be really welcoming at all our shop-fronts might be important to prioritise (even if it might sometimes mean community builders will have to sometimes spend less time spent focusing on the people who seem most promising) is not because of the median outcome from this strategy.
Trying to nail campground effects is really, really, really hard while simultaneously trying to keep effective altruism about effective altruism. However, we're not trying to optimize for the median outcome for the effective altruism community, we're trying to maximize the effective altruism community's expected impact. This is why, despite the fact that "dilution" effects seem like a huge risk, we probably should just aim for the positive tail scenario because that is where our biggest positive impact might be anyway (and also aim to minimize the risks of negative tail scenarios because that also is going to be a big factor in our overall expected impact).
"Median" outreach work might be important to increase our chances of a positive "tail" impact of the effective altruism community as a whole
It is okay for in most worlds for the effective altruism community to have very little impact in the end.
We're not actually trying to guarantee some level of impact in every possible world. We're trying to maximize the effective altruism movement's expected impact.
We're not aiming for a "median" effective altruism community, we're trying to maximize our expected impact (so it's okay if we risk having no impact if that is what we need to do to make positive tails possible or reduce the risk of extreme negative tail outcomes of our work)
Increasing the chances of the positive tails of the effective altruism movement
I think the positive tail impacts are in the worlds where we've mastered the synergies between "tent" strategies and "campground" strategies. If we can find ways of keeping the "tent" on point and still make use of our power to spread ideas to a very large number of people (even if the ideas we spread to a much larger audience are obviously going to be lower fidelity, we can still put a tonne of effort into which lower fidelity ideas are the best ones to spread to make the long-term future go really well).
Avoiding the negative tail impacts of the effective altruism movement
This second thought makes me very sad, but I think it is worth saying. I'm not confident in any of this because I don't like thinking about it so much because it is not fun. Therefore, these thoughts are probably a lot less developed than my "happier", more optimistic thoughts about the effective altruism community.
I have a strong intuition that more campground strategies reduce the risk of negative tail impacts of the effective altruism movement (though I wish I didn't have this intuition and I hope someone is able to convince me that this gut feeling is unfounded because I love the effective altruism movement).
Even if campground strategies make it more likely that the effective altruism movement has no impact, it seems completely plausible to me that that might still be a good thing.
A small and weird "cabal" effective altruism, with a lot of power and a lot of money, makes people feel uncomfortable for good reason. There are selection effects, but history is lined with small groups of powerful people who genuinely believed they were making the world a better place and seem, in retrospect, to have done a lot more harm than good.
More people understanding what we're saying and why makes it more likely that smart people outside our echo chamber can pushback when we're wrong. It's a nice safety harness to prevent very bad outcomes.
It is also plausible to me that a tent effective altruism movement might be more likely to achieve their 95th percentile plus positive impact as well as the 5th percentile and below very negative impact.
Effective altruism feels like a rocket right now and rockets aren't very stable. It intuitively feels easy to have a very big impact, when you do big, ambitious things in an unstable way, and not be able to easily control the sign of that big impact: there is a chance it is very positive or very negative.
I find it plausible that, if you're going to have a huge impact on the world, having a big negative impact is easier than having a big positive impact by a wide margin (doing good is just darn hard and there are no slam dunk answers[2]).[3] Even though we're thinking hard about how to make it good, I think it might just be really easy to make it bad (e.g. by bringing attention to the alignment problem, we might be increasing excitement and interest in the plausibility of AGI and therefore are going to get to AGI faster than if no-one talked about alignment).
I might post a high level post before I have finished teasing out my best guesses on what the implications might be because I find my views change so fast that it is really hard to ever finish writing down what I think and it is possibly still better for me to share some of my thoughts more publicly than to share none of them. I often feel like I'm bouncing around like a yo-yo and I'm hoping at some point my thoughts will settle down somewhere on an "equilibrium" view instead of continuously thinking up considerations that cause me to completely flip my opinion (and leave me saying inconsistent things left, right and center because I just don't know what I think quite yet 😝🤣😅). I have made a commitment bet with a friend to post something as a top-level post within two weeks so I will have to either give a snapshot view then or settle on a view or lose $$ (the only reason I got a finished the top level short-form comment that started this discussion was because of a different bet with a different friend 🙃😶🤷🏼♀️). At the very least, I hope that I can come with a more wholesome (but still absolutely true) framing of a lot of the considerations I've outlined in the remainder of this post as I think it over more.
I think it was Ben Garfinkel said the "no slam dunk" answers thing in his post on suspicious convergence when it comes to arguments about AI risk but I'm too lazy to chase it up to link it(edit: I did go try and chase up the link to this, I think my memory had maybe merged/mixed together this post by Gregory Lewis on suspicious convergence and this transcript from a talk by Ben Garfinkel, I'm leaving both links in this footnote because Gregory Lewis' post is so good that I'll use any excuse to leave a link to it wherever I can even though it wasn't actually relevant to the "no slam dunk answers" quote)maybe I just took my reading of HPMOR too literally :P
I agree that we need scope for people to gradually increase their commitment over time. Actually, that's how my journey has kind of worked out.
On the other hand, I suspect that tail people can build a bigger and more impactful campfire. For example, one Matt Yglesias occasionally posting positive things about EA or EA adjacent ideas increases our campfire by a lot and these people are more likely to be the ones who can influence things.
Yeah, but what people experience when they hear about EA via someone like Matt will determine their further actions/beliefs about EA. If they show up and unnecessarily feel unwelcome or misunderstand EA then we’ve not just missed and opportunity then and there but potentially soured them for the long term (and what they say to others will spur other before we get a chance to reach them).
Hey Chris 😊, yeah, I think changing your mind and life in big ways overnight is a very big ask (and it's nice to feel like you're welcome to think about what might be true before you decide whether to commit to doing anything about it -- it helps a lot with the cognitive dissonance we all feel when our actions, the values we claim to hold ourselves to and what we believe about the world are at odds[1]).
I also completely agree with some targeting being very valuable. I think we should target exceptionally caring people who have exceptional track-records of being able to accomplish the stuff they set out to accomplish/the stuff they believe is valuable/worthwhile. I also think that if you spend a tonne of time with someone who clearly isn't getting it even though they have an impressive track record in some domain, then it makes complete sense to use your marginal community building time elsewhere.
However, my guess is that sometimes we can filter too hard, too early for us to get the tail-end of the effective altruism community's impact.
It is easy for a person to form an accurate impression of another person who is similar to them. It is much harder for a person to quickly form an accurate impression of another person who is really different (but because of diminishing returns, it seems way more valuable on the margin to get people who are exceptional in a different way to the way that the existing community tends to be exceptional than another person who thinks the same way and has the same skills).
and we want to make it easier for people to align these three things in a direction that leads to more caring about others and more seeing the world the way it is (we don't want to push people away from identifying as someone who cares about others, or push them towards shying away from thinking about how the world actually is). If we push too hard on all three things at once, I think it is much easier for people to align these three things by deciding they actually don't value what they thought they valued, deciding they don't really care about others, or finding it incredibly hard to see the world exactly as it is (because otherwise their values and their actions would have this huge gap)
EDIT: Witness this train-wreck of me figuring out what I maybe think in real time half-coherently below as I go :P[1]
yeah, I guess an intuition that I have is that for some decisions we can gain a lot of ground by focusing our efforts in places where it is more likely we come across people who are able to create tail impacts over their lifetimes (e.g. by prioritising creating effective altruism groups in places with lots of people who have a pre-existing track record of being able to achieve the things they set out to achieve). However, I feel like there are some places where more marginal effort on targeting the people who could become tails has sharply diminishing returns and comes with costs that might not actually be worth it. For example, once you have set up a group in a place where people have exceptional track records of achieving the things they set their minds to, trying to figure out from there how much "tail potential" someone has can completely put people off engaging with us at all, including people who might well have been tails if they had been guided in a helpful way.
This entire thread is not actually recommended reading, but I'm keeping it here because I haven't yet decided whether I endorse it or not and I don't see much disutility in leaving it here in the meantime while I think about this more.
I'm also not sure, once we're already targeting people who have track records of doing the things they've put their minds to (which obviously won't be a perfect proxy for tail potential, but it often seems better than no prioritisation of where the marginal group should go), how good we are at assessing someone's "tail potential". This is especially true because there are going to be big marginal returns to finding people who have a different comparative advantage to the existing community (if it is possible to communicate the key ideas/thinking with high fidelity), and those people will have more of an inferential gap to cross before communication is efficient enough for us to be able to tell how smart they are/how much potential they have.
This impression comes from knowing people whose language I speak (metaphorically) while also speaking EA (so I can absorb a lot of EA content and translate it in a way they can understand), who are pretty great at reasoning transparency and at updating in conversations with people with whom they've got pre-established trust (which means when miscommunications inevitably happen, the base assumption is still that I'm arguing in good faith). They can't really demonstrate that reasoning transparency if the person they are talking to doesn't understand their use of language/their worldview well enough to see that it is actually pretty precise, clear and transparent once you understand what they mean by the words they use.
(I mainly have this experience with people who maybe didn't study maths or economics or something that STEM-y, but with whom I share other "languages" that mean I can still cross inferential gaps reasonably efficiently)
This is an existence proof for these kinds of people. It doesn't really tell us all that much about what proportion of people without the backgrounds that make the EA language barrier a lot smaller (like philosophy, econ and STEM) are actually good at the thinking processes we value very highly, which are taught a lot in STEM subjects.
I could have had this experience with the people I know and it could still be the case that "treating people with a huge amount of charity because some people might have the potential to have a tail impact, even if we'd not guess it when we first meet them" isn't actually worth it overall. I've got a biased sample, but I don't think it's irrational that this informs my inside view, even though I am aware that my sample is likely to be heavily biased (I am only going to have built a common language/trust with someone if there is something that fuels our friendship -- the people I want to be friends with are not random! They are people who make me feel understood, or who say things that I find thought-provoking, or a number of other factors that make them naturally a very cherry-picked pool of people).
Basically, my current best guess is that being really open-minded and patient with people once your group is at a place where pretty much everyone has demonstrated they are a tail person in one way or another (whether that's because of their personal traits or because of their fortunate circumstances) will get us more people who have the potential to have a positive tail-end impact engaging with us enough for that potential to have a great shot of being realised.
EDIT: I copied and pasted this comment as a direct reply to Chris and then edited it to make more sense than it did the first time I wrote it, and also to make it way nicer than my off-the-cuff/figuring-out-what-I-thought-as-I-went stream of consciousness. I left this here anyway, partly as context for the later comments and also because I think it's kind of fun to have a record (even if just for me) of how my thoughts develop as I write/tease out what sounds plausibly true once I've written it and what doesn't quite hit the mark of the intuition I'm attempting to articulate (or what intuition, once I find a way to articulate it, ends up seeming obviously false once I've written it up).
I am not arguing that we should not target exceptional people; I think exceptionally smart and caring people are way better to spend a lot of one-on-one time with than people who care an average amount about helping others and who don't yet have a track record of being able to accomplish things they set their minds to.
My guess is that sometimes we can filter too hard, too early for us to get the tail-end of the effective altruism community's impact.
It is easy for a person to form an accurate impression of another person who is similar to them. It is much harder for a person to quickly form an accurate impression of another person who is really different (but because of diminishing returns, it seems way more valuable on the margin to get people who are exceptional in a different way to the way that the existing community tends to be exceptional than another person who thinks the same way and has the same skills).
(I am not confident I will reflectively endorse much of the above 24 hours from now; I'm just sharing my off-the-cuff vibes, which might solidify into more or less confidence when I let these thoughts sit for a bit more time)
If my confidence in any of these claims substantially increases or decreases in the next few days I might come back and clarify that (but if doing this becomes a bit of an ugh field, I'm not going to prioritise de-ughing it because there are other ugh-fields that are higher on my list to prioritise de-ughing 😝)
I think there's a lot of value in people reaching out to people they know (this seems undervalued in EA, though maybe that's intentional, as evangelism can turn people off). This doesn't seem to trade off too substantially against more formal movement-building methods, which should probably filter more on which groups are going to be most impactful.
In terms of expanding the range of people and skills in EA, that seems to be happening over time (take, for example, the EA blog prize: https://effectiveideas.org/, or the increased focus on PAs: https://pineappleoperations.org/). I have no doubt that there are still many useful skills that we're missing, but there's a decent chance that funding would be available if there were a decent team to work on the project.
Makes sense
I suspect that some ways we filter at events of existing groups are good and we should keep doing them.
I also suspect some strategies/tendencies we have when we filter at the group level are counter-productive to finding and keeping high-potential people.
For example, filtering too fast based on how quickly someone seems to "get" longtermism might filter in the people who are more willing to defer and so seem like they get it more than they do.
It might filter out the people who are really trying to think it through, who seem more resistant to the ideas, or who are more willing to voice half-formed thoughts that haven't yet developed into something that deep (because thinking through all the different considerations to form an inside view takes a lot of time and involves voicing a lot of "dead-end" thoughts). Those higher-value people might systematically be classed as "less tractable" or "less smart" when, in fact, it is sometimes[1] just that we have forgotten that people who are really thinking about these ideas seriously, and who are smart enough to possibly have a tail-end impact, are going to say things that don't sound smart as they work out what they think. The further someone is from our echo chamber, the stronger I expect this effect to be.
Obviously I don't know how most groups filter at the group level; this is so dependent on the particular community organizers (and there are maybe also some cultural commonalities across the movement, which is why I find it tempting to make sweeping generalisations that might not hold in many places).
but obviously not always (and I don't actually have a clear idea of how big a deal this issue is, I'm just trying to untangle my various intuitions so I can more easily scrutinize if there is a grain of truth in any of them on closer inspection)
Hmm... Some really interesting thoughts. I generally try to determine whether people are actually making considered counter-arguments vs. repeating cliches, but I take your point that a willingness to voice half-formed thoughts can cause others to assume you're stupid.
I guess in terms of outreach it makes sense to cultivate a sense of practical wisdom so that you can determine when to patiently continue a conversation or when to politely and strategically withdraw so as to save energy and avoid wasting time. This won't be perfect and it's subject to biases as you mentioned, but it's really the best option available.
Hmm, I'm not sure I agree with the claim "it's really the best option available" even if I don't already have a better solution pre-thought up. Or at the very least, I think that how to foster this culture might be worth a lot of strategic thought.
Even if there is a decent chance we end up concluding there isn't all that much we can do, I think the payoff to finding a good way to manage this might be big enough to make up for all the possible worlds where this work ends up being a dead-end.
Well, if you think of anything, let me know.
👍🏼
Oh, here's another excellent example, the EA Writing Retreat.
😍
Yeah, this is happening! I also think it helps a lot that Sam BF's take on longtermism spans a really broad spectrum of ideas, which is really cool!
Sorry Sophia but I still don't completely understand how what we're talking about maps on to actual decisions community builders are making. I'm still suspicious that many of us are sensing a vibe problem but misdiagnosing it as a messaging/cause prioritization problem.
I would find it really helpful if you could give an example of how you perceive the big tent / small tent abstraction mapping on to a concrete action that a community organiser takes.
Let's say someone was starting up a new EA group at an Australian university - what's an example of a mistake you worry this person might make if they're too directly focused on chasing the "tails"?
Relevant context: Xavier used to do lots of community building and helped run an Australian university EA group
Yeah, I think vibes are a big deal, where vibes is pointing to something like "people have fun at events because the social dynamics feel good" (where "fun from social dynamics" is distinct from "fun from the intellectual stimulation of the philosophical puzzles" or other sources of fun).
Maybe this doesn't add anything to that, or maybe everything else I would say I'm pointing to with the whole "median" targeting thing also impacts vibes?
Campground/tent model is maybe more useful for understanding the role of vibes
The campground/tent distinction helps me form my thoughts more explicitly around the role that vibes play (I do think vibes are not the only thing in the vague cluster of related strategies that I want to point to, but they are definitely a huge chunk, and a lot of the other strategies are maybe important because of their impact on "vibes").
In short, our "tent" is the current movement and the "campground" is everyone who isn't in the movement (got a top-level post in the works, privately shared a draft with Xavier already but for anyone else who wants to give feedback before I post it, please DM me!).
Vibes at events seem important for the campground, i.e. for all the people who aren't involved.
For example, a positive campground effect of vibes could go something like:
Someone comes to an event and afterwards has a great impression of effective altruism, but they don't end up engaging much more, maybe because they just happen to get busy or it's not really quite their thing intellectually. Then, years later, they might go on to say nice things to their colleagues or friends who mention effective altruism. Those colleagues and friends can then engage with these ideas they agree with without being socially punished.
The point of these models [1]
The point of the above comments and also the campground/tent model is more about establishing language that allows us to talk about individual community building strategies. I think in community building discussions, a lot of different things often get conflated because we haven't really found good ways of disentangling the various effects people are pointing to (which maybe leads to inferential gaps not being closed, because people can easily talk past each other when the models/language aren't well-defined enough). I do sometimes talk about example strategies, because my motivation for wanting to create these models/language was that I was failing to articulate my intuitions around various community building strategies (but I think there is a decent chance that once I've found a way to clearly express what I initially thought, it will be obvious to me that I was wrong). So for now, I'm not really trying to say "this is what an Australian group should do"; I'm more just trying to work out "if this model were correct, what should an Australian group do?".
I can totally imagine that after getting over the fact that it was your idea and not mine (so maybe give it a month or so), I might just end up deciding that the vibes model is superior to anything discussed here (though of course, current me's inside view is that that isn't the case or I'd just update straight away)!
According to current me. Who knows what the past me that wrote this stuff to begin with thought? I'll answer my own rhetorical question: you'd hope I would! My stab at a guess is that my views are becoming somewhat more coherent and also somewhat less confident with time as I think about this more (so you are currently dealing with moving goalposts, sorry!).
Basically, I'm not yet really trying to draw a map that I'm confident in; I'm more just trying to find a really good legend for many maps, in order to then draw out the current main hypothesis maps with, hopefully, more precision and less confusion.
Also, in my mind, vibe management comes under the same cluster of communication-related things (overlapping categories include framing/language/presentation, PR/marketing, etc.) that people are generally pointing to in these conversations. You seem to be saying that the diagnosis is too broad and that there is a subset of stuff that is the only problem and it all comes under "vibes", which I disagree with (e.g. see this comment and this comment for examples of something else I am trying to point to that doesn't seem to be captured by "vibes", which I think are the sorts of examples posts like this one and this one[1] are trying to point at).
This comment is also, AFAICT, pointing to something true -- I think there is some narrow window in between that is maybe too high a standard but possibly worth aiming for (I'm yet to confidently decide), and we need some clear language to discuss it in such a way that everyone is on the same page and can more easily find all the various relevant cruxes (I'm not sure my own language will end up being that useful, but if such language exists and isn't that hard to find, then it could be useful for facilitating double-crux finding!)
The reputation of the effective altruism society on each campus seems incredibly important for the "effective altruism" brand among key audiences. E.g. future DeepMind team leaders could come out of MIT, Harvard, Stanford, etc.
Are we doing everything we could to leave people with an honest but still good impression (whether or not they seem interested in engaging further)?