I recently learned about the distinction between "movement building" and "community building": Community building is for the people involved in a community, and movement building is in service of the cause itself.
A story I've heard from a bunch of EA groups is that they start out with community building. They attract a couple of people, develop a wonderful vibe, and those people reliably slack on their reading group preparations. Then the group organizers get dissatisfied with the lack of visible progress on the EA path, doubt their own impact, and pivot all the way from community building to movement building. No fun pub meetups anymore. Career fellowships and 1-on-1s all the way.
I think this throws the baby out with the bathwater, and that more often than not, community building is indeed tremendously valuable movement building, even if it doesn't look like that at first glance.
The piece of evidence I can cite on this (and indeed cite over and over again) is Google's "Project Aristotle" study.
In Project Aristotle, Google studied what makes their highest-performing teams highest-performing. And lo and behold: it is not the fanciness of degrees or individual intelligence or agentiness or any other property of the individual team members, but five factors:
"The researchers found that what really mattered was less about who is on the team, and more about how the team worked together. In order of importance:
- Psychological safety: Psychological safety refers to an individual’s perception of the consequences of taking an interpersonal risk or a belief that a team is safe for risk taking in the face of being seen as ignorant, incompetent, negative, or disruptive. In a team with high psychological safety, teammates feel safe to take risks around their team members. They feel confident that no one on the team will embarrass or punish anyone else for admitting a mistake, asking a question, or offering a new idea.
- Dependability: On dependable teams, members reliably complete quality work on time (vs the opposite - shirking responsibilities).
- Structure and clarity: An individual’s understanding of job expectations, the process for fulfilling these expectations, and the consequences of one’s performance are important for team effectiveness. Goals can be set at the individual or group level, and must be specific, challenging, and attainable. Google often uses Objectives and Key Results (OKRs) to help set and communicate short and long term goals.
- Meaning: Finding a sense of purpose in either the work itself or the output is important for team effectiveness. The meaning of work is personal and can vary: financial security, supporting family, helping the team succeed, or self-expression for each individual, for example.
- Impact: The results of one’s work, the subjective judgement that your work is making a difference, is important for teams. Seeing that one’s work is contributing to the organization’s goals can help reveal impact."
What I find remarkable is that "psychological safety" leads the list, while several dynamics in EA actively work against the psychological safety of its members. To name just a few:
- EA tends to attract pretty smart people. If you throw together a bunch of people who have been used all their lives to being the smart kid in the room, they suddenly lose the default role they had in just about any context: now, surrounded by even smarter kids, they are merely the kid. I think this is where a bunch of EAs' impostor syndrome comes from.
- EAs like to work at EA-aligned organizations. That means that some of us feel like any little chat at a conference (or any little comment on the EA Forum or our social media accounts) is also sort of a job interview. First impressions count. And having to perform all day, every day, is a recipe for burnout and for losing trust in the community.
- The debate around weirdness points. Some people have an easy time fitting in; I personally am so weird in so many ways that over the past few years, thinking about weirdness points has caused me a whole lot of harm while providing little value. Essentially, it has made me way more disagreeably weird, by moving me much of the way from charismatic weird to socially awkward weird. I think weirdness points are a pretty useful concept. But I think introducing the concept into a community with a relatively high ratio of neurodivergent people, without doing a whole lot to increase psychological safety alongside it, is essentially spreading an infohazard.
- Some EAs' push to make EA exclusively a professional network. While some versions of this make a whole lot of sense (like strongly discouraging flirting at conferences), other versions disincentivize the warm and trusting conversations that are so necessary not only for building psychological safety, but also for building the personal ties that allow us to do hard things, and for having the big-picture conversations that help us clarify our values.
That's why, in practice, my movement building currently consists of a whole lot of community building, and why I continually push back against people's suggestions to focus more on professional-type outreach.
I have done occasional interest-gauging on career reflection groups over the past months, and I have encouraged other community members to organize their own. But interest was always too low for anything to happen. What Berlin EAs, both new and old, seem to want so far is to read on their own, and to attend and organize socials where they can vibe with other EAs about what they read and about the meaning of life.
My biggest surprise in this regard was an EA LGBTQ+ meetup I kicked off recently, mostly as a fun project. And lo and behold: the interest was astounding. Not only could I delegate the task of organizing it even before the first edition happened[1]; at least two of the ~14 people at our first LGBTQ+ meetup had never been to any other EA Berlin event before.
I'm not yet sure why socials and rationality skill trainings appear to be everything the Berlin crowd wants. There might be a good deal of founder effect at play, and I may just not be hearing enough about other community members' needs. I'm still probing for what else could provide value. (So, if you are a Berliner: suggestions are welcome in this anonymous feedback form.)
But so far, my community building seems to create a wonderful container that enables and encourages people to do their own movement building, however they see fit: through meetups that are *their* ideas, through meetups I kicked off and they gladly take responsibility for, through self-organized co-working. And through suddenly texting me at midnight: "What is your take on how much AI safety field building is still needed, and what does the funding landscape look like? Given the urgency of the task, it seems insane that not more is happening. I might want to start organizing retreats. Can you tell me what's needed for that?"
If you get people to feel safe and comfortable and trust their talents around one another, suddenly a whole lot of magic starts to happen. People find themselves to be smarter and wiser and more powerful than they ever thought, and your movement starts building itself.
So don't neglect community building, even if your grant description looks like movement building all the way.
[1] For comparison: That took three iterations for our Animal Advocacy meetup.
This is my very first comment/post on the EA Forum, so feedback is very appreciated. :)
Thanks for sharing your thoughts and the Google study. When going through the study, I followed a link to this TEDx talk (https://www.youtube.com/watch?v=LhoLuui9gX8) by Amy Edmondson about building a psychologically safe workplace.
At minute 10:17, she mentions something that I found particularly interesting. She describes two different dimensions - psychological safety and motivation & accountability - and distinguishes four different zones: an apathy zone where both dimensions are low, a comfort zone with high psychological safety but low motivation/accountability, an anxiety zone with low psychological safety and high motivation/accountability, and an (optimal) learning zone where both dimensions are high.
I resonated with the idea of these four zones and could imagine that the phenomenon you described at the very beginning of the post (great vibe, people slacking off on the reading) might be a group tending towards the comfort zone.
Love it! That bit slipped my mind and seems like a super relevant addition. Thanks a lot.
Severin - thanks for an interesting post, and a useful distinction between community-building and movement-building.
I would just offer a couple of cautions about Google's Project Aristotle, which has many of the same methodological, statistical, and theoretical problems as other social psychology and organizational psychology research I'm familiar with that tries to assess the 'key ingredients' of successful groups.
First, when Google finds (as summarized by you) that 'it is not the fanciness of degrees or individual intelligence or agentiness or any other property of the individual team members, but five factors', it's crucial to bear in mind that Google is one of the hardest companies in the world to get hired at. They're incredibly selective about the general intelligence, conscientiousness, and social skills of their employees. Basically, their workers are all pretty 'close to ceiling' on the cognitive and personality traits that predict work success. So, lacking much variance in those traits (i.e., a severe 'restriction of range' with respect to individual intelligence), of course they'll find that differences in individual intelligence across groups don't predict much (although it's not clear to me that they actually assessed this quantitatively in their study).
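To make the restriction-of-range point concrete, here's a minimal simulation (a sketch with invented numbers; nothing here comes from Google's actual data or hiring bar): a trait can be a strong predictor of performance in the general population and still look nearly useless once everyone in your sample has already cleared a stringent selection cutoff on it.

```python
# Toy illustration of restriction of range. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

iq = rng.normal(0.0, 1.0, n)                # standardized intelligence
perf = 0.6 * iq + rng.normal(0.0, 0.8, n)   # population correlation is ~0.6

print(f"population r:   {np.corrcoef(iq, perf)[0, 1]:.2f}")   # ~0.60

hired = iq > np.quantile(iq, 0.95)          # only the top 5% get 'hired'
print(f"within-hires r: {np.corrcoef(iq[hired], perf[hired])[0, 1]:.2f}")  # ~0.27
```

Within the selected top 5%, the correlation roughly halves, purely as an artifact of selection, with no change in the underlying causal relationship.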
Second, there are strong ideological and theoretical biases in organizational behavior research toward showing that group organization, norms, management, and corporate culture have big effects on group efficiency, and toward downplaying the role of individual differences (e.g., intelligence, Big Five personality traits, empathizing vs. systemizing styles, etc.). This bias has been a problem for many decades. For example, intelligence researchers tend to find that the average level of individual intelligence within a group has a huge effect on group effectiveness, and that group-level dynamics (such as 'psychological safety') aren't as important. Likewise, personality psychologists often find that the average levels of Conscientiousness, Agreeableness, Emotional Stability, and Openness within a group strongly predict group-level dynamics.
It's also worth noting that some of the things that sound like group-level dynamics are actually aggregates of individual traits -- e.g., 'dependability' is basically a sum of individual levels of the Big Five trait Conscientiousness, and 'structure and clarity' ('An individual's understanding of job expectations, the process for fulfilling these expectations, and the consequences of one's performance') sounds heavily dependent on individual levels of general intelligence.
So, overall, I would just suggest some caution in how we interpret studies done by organizational psychologists inside large corporations -- especially if the corporation is extremely selective about whom it hires. (Counterpoint: many EAs might be about as smart as Google employees, so the restriction-of-range issue might apply as strongly within EA as within Google.)
Thank you so much for your comments and your expertise on this subject.
Some mild-moderate disagreements I have with your claims/framing in your comment above (which should be understood locally as disagreements with this comment, not an endorsement of the original post or Project Aristotle):
The first is that, as you say, both EA-in-practice and Google are quite selective on some traits, like intelligence. So to the extent that Google's data is informative within a range-restricted setting, it should be somewhat applicable to EA as well.
FWIW, it is not at all ex ante obvious to me that (e.g.) variation in intelligence is less predictive of work output than group dynamics for employees above Google's bar (certainly I've heard many people argue the opposite).
Secondly, I'm anecdotally pretty skeptical that Google is selective on the social skills of its employees, particularly among the engineers. In my time there (2018-2019), I didn't notice people being particularly astute or sociable compared to a less restricted sample, like random Uber drivers. I expect there's some truth to the classical stereotypes of engineers/software developers.
I'm also somewhat skeptical that they select heavily on conscientiousness in the Big Five sense of hardworkingness or orderliness. In Silicon Valley, there are a lot of memes and jokes about how nobody gets any work done in Big Tech in general and at Google specifically. Obviously exaggerated, but there's certainly some truth to it. (Also, they couldn't be that selective on hard work, since they hired me, lol.)
I think what I say above is noticeably more true for American employees than for immigrant ones, at least anecdotally, and there are good theoretical reasons to expect this. (I expect second-gen people to be somewhere in between.)
I don't know if you intended your comment this way, but of course your critique of the ideological or incentive biases of organizational behavior research should also apply to intelligence and personality researchers (we have straightforward theoretical reasons to think intelligence researchers will overstate the impact of intelligence on work performance, personality researchers will overstate the impact of personality traits, etc.). So a bias argument unfortunately isn't enough for outsiders to judge the truth here without digging into the details. :/
Linch - these are mostly fair points.
Insofar as both Google and EA are quite range-restricted in terms of people having very high intelligence, conscientiousness, & openness, organizational behavior lessons from the former might transfer to the latter, if they're empirically solid.
I would quibble with your claim that Google doesn't select much on 'conscientiousness in the Big Five sense of hardworkingness or orderliness'. I think there's a tendency of people who are moderately high on conscientiousness to compare themselves to the most driven, workaholic, ambitious people they know, and to feel like they fall short of what they could be doing.
However, everyone who's taught students at a large state university (like me), managed a business, or hired contractors to renovate a house will be better calibrated to the full range of conscientiousness out there in the population. Compared to that distribution, I would bet that almost everybody selected to work at Google will be in the top 10% of conscientiousness -- if only because they're often selected partly for achieving high GPAs at elite universities, where you have to work pretty hard to get good grades.
Also, computer programming requires a very high level of conscientiousness. The six main facets of conscientiousness are 'Competence, Order, Dutifulness, Achievement Striving, Self-Discipline, and Deliberation'. Serious coding requires basically all of these -- whereas incompetent, disorganized, slapdash, unmotivated, undisciplined, and unthoughtful programmers simply won't write good code. This also fits with Simon Baron-Cohen's observation that coders tend to be much stronger on 'Systemizing' than on 'Empathizing' -- and systemizing seems closely related to detail-oriented conscientiousness.
Yep, all of those are valid points. Thanks!
Thanks for writing this! I think there's something here, but some grumbles about Project Aristotle:
First, a naïve interpretation of the results is something like "psychological safety causes you to make more money", but that's not what they were actually measuring. From this page:
So they are defining employee satisfaction and culture as "success", which is fine, but makes the results close to tautological.[1]
Secondly, as far as I can tell they have never released any quantitative results. My (perhaps cynical) interpretation is that this is because the results were not impressive.
Thirdly, when people have attempted quantitative analyses, they have found mixed results. E.g., this meta-analysis of healthcare psychological safety studies finds that only 9/62 published papers found a significant effect of psychological safety; by chance alone at α = 0.05 you'd expect roughly 3 of 62 to reach significance, and publication bias should push the published fraction higher still, so this is close to a null result.
Lastly, a near-universal problem with regression analyses of this kind is halo effects. If you had an idea that was shot down by your manager but later turned out to be great, then your manager "didn't give you space to explore your ideas." But if your idea turned out ex post to be dumb, then she was giving you "helpful feedback." The apparent explanatory power of "psychological safety" comes from descriptions that are superficially about process actually being descriptions of outcomes. My understanding of the expert consensus is that regression analyses of the type Google attempted here are doomed to failure because of confounding effects like this, which makes one wonder about the epistemic environment in which the project was undertaken.
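The halo mechanism is easy to demonstrate in a toy simulation (invented numbers, purely illustrative; this is a sketch of the confound, not a model of Google's actual analysis): if post-hoc survey ratings are even partly colored by outcomes, a naive analysis will "find" that psychological safety predicts success even when, by construction, it has no causal effect at all.

```python
# Toy illustration of the halo-effect confound. All numbers are made up.
import numpy as np

rng = np.random.default_rng(1)
n_teams = 10_000

safety = rng.normal(0.0, 1.0, n_teams)    # 'true' safety: zero causal effect here
outcome = rng.normal(0.0, 1.0, n_teams)   # outcomes driven entirely by other factors

# Retrospective ratings: part genuine perception, part halo from the outcome.
rated_safety = 0.5 * safety + 0.5 * outcome + rng.normal(0.0, 0.5, n_teams)

print(f"true safety vs outcome:  r = {np.corrcoef(safety, outcome)[0, 1]:.2f}")        # ~0.00
print(f"rated safety vs outcome: r = {np.corrcoef(rated_safety, outcome)[0, 1]:.2f}")  # ~0.58
```

The rated variable correlates substantially with outcomes purely because the ratings were contaminated by those outcomes.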
Anyway, reversed stupidity is not intelligence; I do think psychological safety is important and may be useful for EA. But I would be hesitant to rely much on Project Aristotle.
[1] They list sales results as one of the four things they measure, but do not provide the weights. My cynical guess is that they found that sales do not correlate particularly well with psychological safety, but I don't think we can say based on publicly available information.
Oh dear. Well, there goes that bit of evidence out the window.
Thanks for writing down your thoughts, Severin!
That was cool to read :) And it matches some of the basic (though not easy) things I've heard again and again from friends who consult for IT teams in big companies: psychological safety is key to a good and productive work environment. But it's easier said than done, and it requires a lot of investment in stuff that might appear non-essential.
Great post, and glad to see you find this approach fruitful.
Another example that highlights the distinction you emphasize may be the EA conference format versus the EA Funconference format, which is popular in the German community. EA conferences are mostly used to listen to talks and to network in 1-on-1s, which makes them very valuable; they resemble academic conferences to me. EA Funconferences are participant-driven events that set little in terms of agenda and resemble a summer camp more than any formal meetup. I found the German EA Funconferences highly valuable for the reasons you describe: participants were comfortable, actively contributed their own content regardless of seniority, and bonded quite a bit.
Despite having no formal focus on careers, networking, or current research, these events were more helpful for networking, in my experience, than EA conferences, which always feel a bit forced and overly formal to me. That preference may be idiosyncratic, but my impression was that most participants loved the Funconferences I attended. I'd love it if EA had more Funconferences to augment the more formal conferences.
I am unsure if other countries organize Funconferences, but if not, I'd highly encourage it. Carolin Basilowski has been involved with organizing them in Germany, and I imagine she'd be happy to share her experiences.
Strongly agree!
Actually, the seeds for a bunch of my current knowledge about and approach to community building were sown during various unconferences over the years.
The 2020 Unconference was my first in-person encounter with EA. My first contact with EA had been reading a bunch of 80k articles that didn't quite seem to have me in their target audience, so I was very positively surprised by how warm and caring and non-elitist the community was.
I have since learned to get these things out of EAG(x)s as well. But had the fancy professional events been my first contact with the community, I might well not be around anymore.
The unconference format in EA has evolved in several directions since its inception. For example, the AI Safety Europe retreat this year was an unconference with a framing that optimized for a clear personal/professional separation. In my impression, it worked wonderfully at that - not only in combining the flat hierarchies of the format with a professional vibe, but also in the connections made. Meanwhile, the German unconferences evolved away from a professional focus: first into Funconferences, and then into a no-longer-EA-affiliated summer camp that's completely organized by volunteers and funded by participants.
I started drafting a follow-up to this post with practical suggestions today. Doing more unconferences is on the list.
I wonder if this would work better if it were structured in such a way as to meet people's social needs as well. For example, attending a weekly career reflection group feels like work. On the other hand, attending a weekend-long or week-long reflection retreat with a relatively light schedule feels relaxing.
Thanks! Yep, retreats like that are high-ish on the to-do list.
Thanks for your work, and for sharing your thoughts; that all makes sense to me, and I'm glad you seem to be succeeding in making people feel psychologically safe and encouraged to make their ideas happen! (And thanks for reminding me of the Google study.)
Well, we also have a very popular TEAMWORK speaker series, and I'm part of one highly regarded cause-specific dinner networking thing! :P So I'd indeed guess that this is partially a founder effect. Regarding some random ideas...
Thanks! Yep, "socials are all people want" is a bit of a hyperbole. In addition to the TEAMWORK talks, we also have the Fake Meat - Real Talk reading/discussion group dinners, and we will have a talk at the next monthly social, too.
The one-day career workshops sound great, added to the to-do list.
I really want to create an environment in my EA groups that's high in what is labelled "psychological safety" here, but it's hard to make this felt by others, especially in larger groups. The best I've got is to just explicitly state the kind of environment I would like to create, but I feel like there's more I could do. Any suggestions?
Yep, expectation-setting like that is super valuable.
I also wrote a short facilitation handbook a couple of months ago. It's useful for meetups, workshops, and basically any other kind of work with groups. Optimizing for psychological safety is implicit in a bunch of things there.
Thank you for writing this up, Severin. I think you're really onto something: EA communities can work as wonderful containers (borrowing your phrase!), as stable launchpads, and as supportive audiences for movement building. Trying to change the world is hard work, and having a community makes it easier.
I'm delighted to hear about your successes - you are empowering your community members to explore their own ideas and to put them into action. That's next-level community building!