David_Moss

Principal Research Director @ Rethink Priorities
7659 karma · Working (6-15 years)

Bio

I am the Principal Research Director at Rethink Priorities. I lead our Surveys and Data Analysis department and our Worldview Investigation Team. 

The Worldview Investigation Team previously completed the Moral Weight Project and the CURVE Sequence / Cross-Cause Model. We're currently working on tools to help EAs decide how they should allocate resources within portfolios of different causes, and on how to use a moral parliament approach to allocate resources given metanormative uncertainty.

The Surveys and Data Analysis Team primarily works on private commissions for core EA movement and longtermist orgs, where we provide:

  • Private polling to assess public attitudes
  • Message testing / framing experiments, testing online ads
  • Expert surveys
  • Private data analyses and survey / analysis consultation
  • Impact assessments of orgs/programs

Formerly, I also managed our Wild Animal Welfare department. I've previously worked for Charity Science and been a trustee at Charity Entrepreneurship and EA London.

My academic interests are in moral psychology and methodology at the intersection of psychology and philosophy.

How I can help others

Survey methodology and data analysis.

Sequences (3)

RP US Public AI Attitudes Surveys
EA Survey 2022
EA Survey 2020

Comments (548)

Thanks for clarifying!

I do continue to worry a bit about self-fulfilling prophecies. If EA organizations make it disproportionately easy for people prioritizing certain causes to engage (e.g. by providing events for those specific causes, or by heavily funding employment opportunities for those causes), then I think it becomes murkier how to account for weighted cause prioritization, because cause prioritization is both an input and an output.

I share this concern about weighting community views by engagement. That said, it seems plausible to me that the engagement-weighted views of the community are the least selected for [the set of views predominant among EA leadership] out of the options presented. True, CEA (and their donors, and respected people who have thought about cause prioritisation a lot) can influence the views of highly engaged EAs in various ways. But I would expect CEA staff, donors, and select experts to be more strongly selected for a narrower set of views.

Thanks!
 

Do you think it can answer the question: if you're the first in your family to switch to veganism/vegetarianism, how likely is it that another family member will follow suit within X years?

I'm afraid not. I think we'd need to know more about the respondents' families and the order in which they adopted veganism/vegetarianism to assess that. I agree that it sounds like an interesting research question!

When we asked current vegetarians/vegans what influenced them to adopt their diet, personal conversations (primarily with friends/family) were among the top influences.

So your surprise/expectation seems reasonable! Of course, I don't know whether it's actually surprising, since presumably whether anyone actually converts depends on lots of other features of a given social network (do your networks contain a lot of people who were already vegetarian/vegan?).

Thanks Zach. Like others, I'm excited to see that CEA will continue to take a principles-first approach to EA.

There's one point I'd be interested in you saying more about. In the post you express qualified support for CEA's cause prioritization being influenced by CEA's staff, CEA's funders and "people who have thought a lot about cause prioritization," but reject the idea that CEA should "mirror back the cause prioritization of the community as a whole."

I'm curious whether this means only that you reject the idea that CEA's cause prioritization should be entirely based on the unweighted views of the community, or whether you think that the weighted views of the community (giving more weight to those who have thought about cause prioritization more) should at least somewhat influence CEA's decisions, or somewhere in between.

Thanks, makes sense!

In that case, dropping those two items, the responses seem pretty coherent, in that you can see a fairly clear pattern of support for ~ policy and think tanks, ~ outreach, and ~ technical work cohering together.[1] I think this is reassuring about the meaningfulness of people's responses (while not, of course, suggesting that they got the substantive values right).

  1. ^

    The exact results vary depending on the nuances of the analysis, of course, so I wouldn't read too much into the specifics of the results above without digging into it more yourself, though we found broadly the same pattern across a number of different analyses.
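To illustrate what I mean by responses "cohering", here is a minimal sketch (not our actual analysis; the file and column names are hypothetical stand-ins for the survey items): one simple check is whether support for related items is positively correlated across respondents.

```python
import pandas as pd

# Hypothetical data: one row per respondent, one column per item,
# values are the respondent's support/allocation for that item.
df = pd.read_csv("allocations.csv")

# Hypothetical item names standing in for the actual survey items
items = ["policy_think_tanks", "outreach", "technical_work"]

# Positive pairwise correlations suggest these items cohere into a pattern
print(df[items].corr().round(2))
```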

I see a couple of the questions have a lot of missing data concentrated at the start of the dataset (e.g. "Fund Edith for one year", "Improve big company safety orientation 5 percentage points"). Is there a particular reason for that, e.g. were the questions added to the survey partway through, after some respondents had already taken it? (This influences how we should interpret the missing data.)
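As a rough illustration of the kind of check I have in mind (a minimal sketch with hypothetical file and column names, not your actual data): compare an item's missingness among early versus later respondents.

```python
import pandas as pd

# Minimal sketch: is missingness on an item concentrated among early respondents?
# If so, the item was likely added partway through fielding.
df = pd.read_csv("responses.csv")  # assumed ordered by submission time

item = "fund_edith_one_year"  # hypothetical column name
missing = df[item].isna()

half = len(df) // 2
print(f"Missing rate, first half: {missing.iloc[:half].mean():.1%}")
print(f"Missing rate, second half: {missing.iloc[half:].mean():.1%}")
```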

 

do you think there are principled reasons to think that the more “explicit” ethics of effective altruists is actually a bad thing? Or should we take this causal explanation to be, in effect, a debunking explanation of why many people are unreasonably opposed to EA (and to goal-directed ethics more generally)?

 

We discuss this in our preprint.

We find that people evaluate those who deliberate about their donations less positively (e.g. less moral, less desirable as social partners) than those who make their donations based on an empathic response. But a possible explanation of this response is that people take these different approaches to be signals about the character of the other person:

Namely, donating empathically may signal that one has good moral character and is a valuable social partner, because reacting empathically communicates an inclination to help those in need and a reliable motivation to behave prosocially. Supporting this, research has found that people infer that those who rely on emotion are more likely to cooperate and are more likely to feel emotions like empathy (Levine et al., 2018). Additionally, research has shown that donors who experience greater empathy are perceived to have a better moral character, and that this effect is reduced when the emotion felt does not lead to prosocial behavior (Barasch et al., 2014).

In contrast, deliberating about cost-effectiveness may be perceived as a weaker indicator of prosociality, as it suggests that donors are motivated more by pragmatic considerations than by concern for recipients’ feelings. As a result, deliberative donors might withhold assistance in situations where the aid is not deemed cost-effective enough, despite a compelling emotional appeal from the individual in need. This could lead observers to infer that deliberative donors are more cold, calculating, and pragmatic, with weaker commitment to interpersonal relationships. Similarly, research on judgments of individuals who make consequentialist decisions—such as helping a greater number of strangers rather than a single family member—indicates that they are less favored as partners in close relationships (e.g., friend, spouse) and are perceived as less loyal (Everett et al., 2018). Moreover, research has found that helping strangers instead of close others (e.g., friends, family) is deemed morally unacceptable and may have negative relational consequences (Law et al., 2022; McManus et al., 2020).

I think this suggests that individuals may have good reasons for their negative evaluations: people who deliberate about the cost-effectiveness of their aid may be less likely than those who help out of an empathic response to provide aid in the kinds of typical cases people normally care about (e.g. they may be less likely to help the person themselves, or someone close to them, if they are in need). But, of course, this doesn't show that deliberators are worse all things considered, so I think this remains quite viable as a debunking explanation.

I think the survey team didn't do a per capita visualisation because response rates will probably vary a lot between countries for reasons other than the number of EAs per capita. 

 

Yeah, as we note here:

We have reported this previously in both EAS 2018 and EAS 2019. We didn't report it this year because the per capita numbers are pretty noisy (at least among the locations with the highest number of EAs per capita, which tend to be low-population countries).

The last time we reported this was 2020, with the caveat that "Iceland, Luxembourg and Cyprus, nevertheless have very low numbers of EA (<5) respondents. This graph doesn't leave out any countries with particularly high numbers of EAs, in absolute terms, though Poland and China are missing despite having >10."

 

We'll discuss the details more in the post we are putting together on this (hoping to release this month), but there is indeed quite a lot of noise when you look at EAs per capita, particularly for the countries with the highest EAs per capita, due to small populations and small numbers of respondents, often close to zero (e.g. small countries can jump in and out of the top rankings based on having 5 or 0 respondents in a year). In the full post we'll additionally examine results for composites of years (e.g. 2020-2022), and which countries outperform what a model would predict (though that will be heavily caveated).
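As a rough illustration of the small-numbers problem (a minimal sketch with made-up figures, not actual EA Survey data), a single extra respondent can move a small country's per capita rate by more than a large country's entire rate:

```python
# Illustrative (made-up) figures: (respondents, population in millions)
countries = {
    "Small country A": (3, 0.4),
    "Small country B": (0, 0.6),
    "Large country C": (800, 330.0),
}

for name, (n, pop_m) in countries.items():
    rate = n / pop_m                  # respondents per million
    rate_plus_one = (n + 1) / pop_m   # same country with one extra respondent
    print(f"{name}: {rate:.2f} -> {rate_plus_one:.2f} per million with one more respondent")
```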

It’s sometimes useful to ask the advisee to come to the call with a list of questions or topics prepared. If you’d like us to ask all advisees to do this, just let us know!


I personally find it extremely useful when people provide questions beforehand, even if it's just a couple of bullet points. But I've also found (in contexts other than this[1]) that sometimes asking people to send a few bullet points is too high a barrier to entry and they just won't do that. So I'd suggest making this a suggestion, rather than something that sounds more like a requirement.

  1. ^

    e.g. when people are requesting surveys.

I agree that sometimes you won't know whether people think positively or negatively of something (particularly if we're thinking about individual interactions). But I think very often people will have a good sense of this (particularly if we're thinking about the aggregate effect), and often people will be quite explicit about this.
