I am the Principal Research Director at Rethink Priorities. I lead our Surveys and Data Analysis department and our Worldview Investigation Team.
The Worldview Investigation Team previously completed the Moral Weight Project and the CURVE Sequence / Cross-Cause Model. We're currently working on tools to help EAs decide how they should allocate resources within portfolios of different causes, and on how to use a moral parliament approach to allocate resources given metanormative uncertainty.
The Surveys and Data Analysis Team primarily works on private commissions for core EA movement and longtermist orgs, where we provide survey methodology and data analysis.
I formerly managed our Wild Animal Welfare department, and I've previously worked for Charity Science and been a trustee at Charity Entrepreneurship and EA London.
My academic interests are in moral psychology and methodology at the intersection of psychology and philosophy.
I think the difference between me and Yudkowsky has less to do with social effects on our speech and more to do with differing epistemic practices, i.e. about how confident one can reasonably be about the effects of poorly understood future technologies emerging in future, poorly understood circumstances.
This isn't expressing disagreement, but I think it's also important to consider the social effects of our speaking in line with different epistemic practices.
I think these questions are relevant in a variety of ways:
One move sometimes made to suggest that these things aren't relevant is to say that we only need to be concerned about awareness and attitudes among certain specific groups (e.g. policymakers or elite students). But even if knowing about awareness of and attitudes towards EA among certain groups is highly important, it doesn't follow that broader public attitudes are unimportant.
As a practical matter, it's also worth bearing in mind that large representative surveys like this can generate estimates for some niche subgroups (just not extremely niche ones like elite policymakers), particularly with larger sample sizes.
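As a rough illustration of why sample size matters here, the following sketch (my own back-of-the-envelope example, assuming a simple random sample and a worst-case proportion of 0.5) shows the approximate 95% margin of error for an estimate within a subgroup:

```python
import math

def subgroup_margin_of_error(total_n: int, subgroup_share: float, p: float = 0.5) -> float:
    """Approximate 95% margin of error for a proportion estimated
    within a subgroup of a simple random sample.

    total_n: overall survey sample size
    subgroup_share: fraction of respondents falling in the subgroup
    p: assumed proportion (0.5 gives the widest interval)
    """
    n_sub = total_n * subgroup_share
    return 1.96 * math.sqrt(p * (1 - p) / n_sub)

# A subgroup making up 2% of a 10,000-person sample still yields ~200
# respondents, for a margin of error of roughly +/- 7 points:
print(f"{subgroup_margin_of_error(10_000, 0.02):.3f}")   # ~0.069
# ...whereas a truly niche group (0.1% of the population) yields only
# ~10 respondents, giving an estimate too noisy to be useful:
print(f"{subgroup_margin_of_error(10_000, 0.001):.3f}")  # ~0.310
```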
We didn't directly examine why worry is increasing across these surveys. I agree that would be an interesting thing to examine in additional work.
That said, when we asked people why they agreed or disagreed with the CAIS statement, people who agreed mentioned a variety of factors, including "tech experts" expressing concerns, the fact that they had seen Terminator etc., and directly observing characteristics of AI (e.g. that it seemed to be learning faster than we would be able to handle). In the CAIS statement writeup, we only examined the reasons why people disagreed (the agreement responses tended to be more homogeneous, because many people were just saying, roughly, that it's a serious threat), but we could potentially do further analysis of why they agreed. We'd also be interested to explore this in future work.
It's also perhaps worth noting that we originally wanted to run Pulse monthly, which would allow us to track changes in response to specific events (e.g. the releases of new LLM versions). Now that we're running it quarterly (due to changes in the funding situation), that will be less feasible.
Addressing only the results reported in this post, rather than the survey as a whole:
I kind of feel like the most important version of a survey like this would be of certain subsets of people (e.g. tech, policy, animal welfare).
We agree these would be valuable surveys to conduct (and we'd be happy to conduct them if someone wants to fund us to do so). But they'd be very different kinds of surveys. Large representative surveys like this do allow us to generate estimates for relatively niche subsets of the population, but if you are interested in a very small subset of people (e.g. those working in animal welfare), it would be better to run a separate targeted survey.
Also why didn't you call out that the more people know what EA is, the less they seem to like it? Or was that difference not statistically significant?
("Sentiment towards EA among those who had heard of it was positive (51% positive vs. 38% negative among those stringently aware, and 70% positive, vs. 22% negative among those permissively aware)."
This comparison wouldn't strictly make sense for a few reasons:
I do think it is notable that sentiment is more positive among those who did not report awareness of EA, and responded to a particular presentation of EA, compared to sentiment among those who were classified as having encountered EA. However, this is also not a straightforward comparison: the composition of these groups is different, and the people who did not claim awareness were responding only to one particular presentation of EA. More research would be required to assess whether learning more about EA leads to people having more negative opinions about it.
I believe all of that is true, but at the same time, I’m almost certain we’ve lost significant credibility with key stakeholders... Friendly organisations have explicitly stated they do not want to publicly associate with us due to our EA branding, as the EA brand has become a major drawback among their key stakeholders
I definitely agree this is true; it's just not sufficient in itself to show that movement building for EA is impossible or less viable than promoting other ideas (for that, we'd need to assess alternative brands/framings).
Agree that this is likely explained by people thinking they recognise the familiar terms and conflating it with the Humane Society or other local Humane Societies. We didn't include specific checks of real awareness for The Humane League or the other orgs and figures on our list, because they weren't key outcomes we were interested in verifying awareness of per se, and survey length is limited. They were included primarily to provide a point of comparison (alongside a mixture of fake items, real but very low-incidence items, and real and very common items), and to allow us another check by assessing whether responses were associated with each other in ways that made sense (i.e. we would expect EA-related terms to show sensible associations with each other, charities in general to be associated with each other, and tech-related items to be associated with each other).
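For illustration, here is a minimal sketch of that kind of association check, assuming binary claimed-awareness indicators per respondent (the data and column names below are hypothetical, not the actual survey variables):

```python
import pandas as pd

# Hypothetical 0/1 "claimed awareness" indicators per respondent;
# entirely illustrative, not the real survey data.
df = pd.DataFrame({
    "effective_altruism": [1, 0, 0, 1, 0, 1, 0, 0],
    "givewell":           [1, 0, 0, 1, 0, 0, 0, 0],
    "the_humane_league":  [0, 1, 0, 1, 1, 0, 1, 0],
    "fake_charity_x":     [0, 1, 0, 0, 1, 0, 1, 0],
})

# Pairwise Pearson correlations of binary items (phi coefficients).
# In real data, if claimed awareness is meaningful, we'd expect related
# items (e.g. the EA-related terms) to cluster together, while claimed
# awareness of a fake item would mainly flag overclaiming respondents.
print(df.corr().round(2))
```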
Based on Google Trends, I'd expect The Humane League to be a bit less well known than GiveWell, and the Humane Society to be much more well known.
Great talk, thanks!
The thing is, broad awareness of EA is still really low, around 2%. This is from research done last summer by Rethink Priorities, CEA, and Breakwater. They found that even though awareness might be higher in specific groups that we care about, like some elite circles, on the whole awareness of EA is still very low.
Agreed with this.
That said, I'd also add that sentiment is still positive even among those who have heard of EA.
Our research on elite university students (unpublished but referenced by CEA here), also found that among those who were familiar with EA, only a small number mentioned FTX.
I was indeed trying to say option (a): that there's a "bias towards animals relative to other cause areas." Yes, I agree it would be ideal to have people on different sides of debates in these kinds of teams, but that's often impractical and not my point here.
Thanks for clarifying!
Some broader points:
If the members of the team wanted to work solely on animal causes (in a different position), I think they'd all be well-placed to do so.
That said, I don't think we do too badly here, even in the context of AW specifically: e.g. Bob Fischer has previously published on hierarchicalism (the view that humans matter more than other animals).
I think the possibility that outreach to younger age groups[1] might be net negative is relatively neglected. That said, the two possible reasons suggested here didn't strike me as particularly conclusive.
The main reasons why I'm somewhat wary of outreach to younger ages (though there are certainly many considerations on both sides):
These questions seem very uncertain, but also empirically tractable, so it's a shame that more hasn't been done to try to address them. For example, it seems relatively straightforward to compare the success rates of outreach targeting different ages.
We previously did a little work to look at the relationship between the age when people first got involved in EA and their level of engagement. Prima facie, younger age of involvement seemed associated with higher engagement, though there's a relative dearth of people who joined EA at younger ages, making the estimates uncertain (when comparing <20s to early 20s, for example), and we'd need to spend more time on it to disentangle other possible confounds.
Or it might be that 'life stages' are the relevant factor rather than age per se, i.e. a younger person who's already an undergrad might have similar outcomes when exposed to EA as a typical-age undergrad, whereas reaching out to people while in high school (regardless of age) might be associated with negative outcomes.
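As a rough illustration of how one might try to separate age from life stage, here is a sketch of a regression including both as predictors; the data, variable names, and coding below are entirely hypothetical, and a real analysis would need far more data and careful confound controls:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: age when first involved in EA, life stage at first
# contact, and a later engagement score. Purely illustrative.
df = pd.DataFrame({
    "age_first_involved": [16, 17, 18, 19, 20, 21, 22, 24, 26, 30],
    "life_stage": ["high_school", "high_school", "undergrad", "undergrad",
                   "undergrad", "undergrad", "undergrad", "postgrad",
                   "postgrad", "working"],
    "engagement": [2, 1, 4, 3, 4, 5, 4, 3, 4, 3],
})

# Including both predictors lets the life-stage dummies absorb
# stage-specific effects (e.g. being reached in high school), so the
# age_first_involved coefficient reflects age per se.
model = smf.ols("engagement ~ age_first_involved + C(life_stage)", data=df).fit()
print(model.summary())
```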