I am the Principal Research Director at Rethink Priorities. I lead our Surveys and Data Analysis department and our Worldview Investigation Team.
The Worldview Investigation Team previously completed the Moral Weight Project and CURVE Sequence / Cross-Cause Model. We're currently working on tools to help EAs decide how they should allocate resources within portfolios of different causes, and how to use a moral parliament approach to allocate resources given metanormative uncertainty.
The Surveys and Data Analysis Team primarily works on private commissions for core EA movement and longtermist orgs, where we provide:

Survey methodology and data analysis.

Formerly, I also managed our Wild Animal Welfare department, and I've previously worked for Charity Science and been a trustee at Charity Entrepreneurship and EA London.

My academic interests are in moral psychology and methodology at the intersection of psychology and philosophy.
If your employer/manager/funder/relevant people said something like: ‘We have full confidence in you, your job is guaranteed, and we want you to focus on whatever you think is best’ - would that change what you focus on? How much?
My personal impression is that significant increases in unrestricted funding (even if it were a 1-1 replacement for restricted funding) would dramatically change orgs and individual prioritisations in many cases.
To the extent that one thinks that researchers are better placed to identify high value research questions (which, to be clear, may not hold in many cases), this seems bad.
Reading the examples of negative CCIs (e.g. below) makes me think that one of the most informative kinds of future research would be assessing the frequency of events of this kind across all EAs, and assessing whether they differ across Western and non-Western EAs. Based on my own experience, I would expect both Western and non-Western EAs to experience similar events near-constantly, both within EA and without. So it seems like a core crux is whether they occur more frequently or severely in either group / when different groups interact / in some particular setting rather than another.
When they went in the wrong direction, someone yelled the right direction to them in a way that felt infantilising and demeaning.
When they were inside the afterparty space, no one seemed interested in engaging with them, so they left early.
In particular, it seems to me that the closer to the core people are, the less inclined they are to identify themselves with EA. What’s going on here? I don’t know, but it’s an interesting trailhead to me.
I share this impression. Also, we see that satisfaction is lower among people who have been in EA longer compared to newer EAs (though this is not true for self-reported engagement), which seems potentially related. Note that we would expect to see pressure in the opposite direction due to less satisfied people dropping out over time.
I think this implies that there is a substantive non-quirky effect. That said, I imagine some of this may be explained by new EAs simply being particularly enthusiastic in ways which explain stronger identification with EA and higher satisfaction.[1]
One dynamic which I expect explains this is the narcissism of small differences: as people become closer to EA, differences and disagreements become more salient, and so people may become more inclined to distance themselves from EA as a whole.
I'm not suggesting any particular causal theory about the relationship between satisfaction and identification.
I wonder if the predictors are conflating 'EA's pressures can cause neurotic symptoms' with 'EA attracts relatively more people with neurotic temperaments'.
My default assumption would be that we're measuring trait-neuroticism rather than just temporary, locally caused anxiety. That's partly because personality traits are relatively stable, but also because I'd be surprised if EA were having a large effect on people's tendency to describe themselves as being "anxious, easily upset" etc. (and that doesn't seem to be the case in our results on the effect of EA on mental health). Of course, it's also worth noting that our results in this study tended towards lower neuroticism for EAs.
I do think that whether the results are driven more by EA selecting for people who are higher in emotional stability at the outset, or whether the community is losing people with higher trait-neuroticism, is a significant question however. I agree that we couldn't empirically tackle this without further data, such as by measuring personality across years and tracking dropout.
Thanks Cameron!
One speculation I wanted to share here regarding the significant agreeableness difference (the obvious outlier) is that our test bank did not include any reverse-scored agreeableness items like 'critical; quarrelsome', which is what seems to be mainly driving the difference here.
Yeah, I agree! And I think that the pattern at the item level is pretty interesting. Namely, EAs are reasonably 'sympathetic, warm', but a significant number are 'critical, quarrelsome'. As I noted in the post, I think this matches common impressions of EAs (genuinely altruistic, but happy to bluntly disagree).
I wonder to what degree in an EA context, the 'critical; quarrelsome' item in particular might have tapped more into openness than agreeableness for some—ie, in such an ideas-forward space, I wonder if this question was read as something more like 'critical thinking; not afraid to question ideas' rather than what might have been read in more lay circles as something more like 'contrarian, argumentative.'
It's an interesting theory! Fwiw, I checked the item-level correlations and the correlations between the reverse-coded agreeableness item and the two openness items were both -0.001.
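For readers who want to run the same kind of check on their own item-level data, a minimal sketch is below. The variable names and response values are purely illustrative (they are not our survey's items or results); the point is just the mechanics of correlating a reverse-coded agreeableness item against openness items.

```python
import numpy as np

# Hypothetical item-level responses on a 1-7 scale; names and values
# are illustrative only, not drawn from the actual survey data.
critical_quarrelsome = np.array([2, 5, 3, 4, 1, 5, 2, 3])  # reverse-coded agreeableness item
open_item_1 = np.array([4, 3, 5, 2, 4, 3, 5, 4])           # first openness item
open_item_2 = np.array([3, 4, 2, 5, 3, 4, 2, 3])           # second openness item

# Pearson correlation between the reverse-coded agreeableness item
# and each openness item.
for name, item in [("openness item 1", open_item_1),
                   ("openness item 2", open_item_2)]:
    r = np.corrcoef(critical_quarrelsome, item)[0, 1]
    print(f"r with {name}: {r:.3f}")
```

If the 'critical; quarrelsome' item were really tapping openness for some respondents, we'd expect these correlations to be meaningfully positive rather than near zero.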
This is pure speculation, but in general, I think teasing apart EAs' trade-off between their compassionate attitudes and their willingness to disagree intellectually would make for an interesting follow-up.
Agreed. My own speculation would be that EAs tend to place a high value on truth (in large part due to thinking it's instrumentally necessary to do the most good). It also seems plausible to me that EA selects for people who are more willing to be disagreeable, in this sense, since it implies being willing to somewhat disagreeably say 'some causes are much more impactful than others, and we should prioritise those based on deliberation, rather than support more popular/emotionally appealing causes'.
Thanks for the comment! Your post was a key impetus for us prioritising publishing these results.
To add some specifics to my earlier comment, if we look at the confidence intervals for the effect sizes in terms of Cohen's d (visualizer), we see that:
Interpreting effect sizes is, of course, not straightforward. The conventional standards are somewhat arbitrary, and it's quite widely agreed that the classic Cohen's d benchmarks for small/medium/large effect sizes are quite conservative (the average effect size in psychology is no more than d = 0.3). This empirically generated set of benchmarks (if you convert r into d) would suggest that around 0.2 is small and 0.4 is medium. But whether a particular effect size is practically meaningful varies depending on the context. For example, a small effect may make very little difference at the individual level, but make large differences at the aggregate level / in the long run.
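The r-to-d conversion mentioned above can be sketched as follows, assuming the standard formula for two equal-sized groups, d = 2r / √(1 − r²). The example r values are illustrative, not figures from the benchmarking source.

```python
import math

def r_to_d(r: float) -> float:
    """Convert a correlation coefficient r to Cohen's d
    (standard formula assuming two equal-sized groups)."""
    return 2 * r / math.sqrt(1 - r ** 2)

# Illustrative conversions:
for r in (0.10, 0.20, 0.30):
    print(f"r = {r:.2f}  ->  d = {r_to_d(r):.2f}")
# r = 0.10  ->  d = 0.20
# r = 0.20  ->  d = 0.41
# r = 0.30  ->  d = 0.63
```

Note that the equal-group-size assumption matters: with very unequal groups the conversion needs a correction term, so treat these as rough equivalences.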
In my personal view, very few or none, if you are looking at the association between personality and outcomes. As we note, the associations between personality, donation behavior, and cause prioritization were "only small". I think that in itself is an important finding, since some people would expect large influences on things like donation or cause prioritisation.
If you're talking about the differences in personality between EAs and the general population, I think these are potentially more practically significant (for example, if you're considering things like influences on outreach / recruitment, or the influence of these differences on the kinds of people EA attracts). Here even small differences could be significant in a non-linear way (for example, if EA is disproportionately appealing to people who are very high in need for cognition, or similar traits, this could have a big effect). Some of these apparent differences between EA and the general population are not obviously small, though the analyses where we can benchmark to population levels at both the gender and age level (i.e. Big 5) do not give us clarity about the exact magnitude of the differences. Even with 1,600 respondents, the sample size is not so large once you account for age and gender in this way (and we argue that you do need to do this here for the results to be interpretable).
Perhaps Uhlmann et al. (2015) or Landy & Uhlmann (2018)?
From the latter: