Lucius Caviola

1098 karma · New York, NY, USA
luciuscaviola.com

Bio

I research the psychology of effective altruism and longtermism.

Comments (15)

Thanks Ben!

13.6% (3 people) of the 22 students who clicked on a link to sign up to a newsletter about EA already knew what EA was.

And 6.9% of the 115 students who clicked on at least one link (e.g. EA website, link to subscribe to newsletter, 80k website) already knew what EA was.

Another potentially useful measure (to get at people’s motivation to act) could be this one:

“Some people in the Effective Altruism community have changed their career paths in order to have a career that will do the most good possible in line with the principles of Effective Altruism. Could you imagine doing the same now or in the future? Yes / No”

Of the total sample, 42.9% said yes to it. And of those people, only 10.4% already knew what EA was.

And if we only look at those who are very EA-sympathetic (scoring high on EA agreement, effectiveness-focus, expansive altruism, and interest in learning more about EA), the figure is 21.8%. In other words: of the most EA-sympathetic students who said they could imagine changing their career to do the most good, 21.8% (12 people) already knew what EA was.

(66.3% of the very EA-sympathetic students said they could imagine changing their career path to do the most good.)

A caveat is that some of these percentages are inferred from relatively small sample sizes — so they could be off.
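(For readers who want to see how such conditional percentages are computed, here is a minimal illustrative sketch in Python. The dataframe, column names, and values are hypothetical stand-ins, not our actual data or analysis code.)

```python
import pandas as pd

# Purely illustrative rows; column names and coding are hypothetical.
df = pd.DataFrame({
    "clicked_newsletter":  [1, 1, 0, 1, 0, 0],
    "would_change_career": [1, 0, 1, 1, 0, 1],
    "ea_sympathetic":      [1, 0, 1, 1, 0, 0],
    "knew_ea":             [1, 0, 0, 0, 0, 1],
})

def pct_knew_ea(subsample: pd.DataFrame) -> str:
    """Share of a subsample that already knew what EA was, with raw counts."""
    n = len(subsample)
    k = int(subsample["knew_ea"].sum())
    return f"{100 * k / n:.1f}% ({k} of {n})" if n else "n/a (empty subsample)"

# Students who clicked the newsletter sign-up link
print(pct_knew_ea(df[df["clicked_newsletter"] == 1]))

# Very EA-sympathetic students who could imagine changing their career
print(pct_knew_ea(df[(df["ea_sympathetic"] == 1) & (df["would_change_career"] == 1)]))
```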

We've asked them about a few 'schools of thought': effective altruism, utilitarianism, existential risk mitigation, longtermism, evidence-based medicine, poststructuralism (see footnote 4 for results). But very good idea to ask about a fake one too!

(Note that we also asked participants who said they have heard of EA to explain what it is. And we then manually coded whether their definition was sufficiently accurate. That's how we derived the 7.4% estimate.)

We considered this too. But the significant correlations with education level and income held even after controlling for age. (We mention this below one of the tables.)
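(To illustrate what "controlling for age" means here, below is a minimal sketch of a partial correlation: both variables are residualized on age before being correlated. The function and variable names are hypothetical, and this is not our actual analysis code.)

```python
import numpy as np

def partial_corr(x, y, control):
    """Correlation between x and y after regressing both on a control variable."""
    x, y, control = (np.asarray(v, dtype=float) for v in (x, y, control))
    Z = np.column_stack([np.ones_like(control), control])
    # Residualize x and y on the control via least squares, then correlate residuals.
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Usage with hypothetical arrays, e.g. the education-income link after partialling out age:
# print(partial_corr(education, income, age))
```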

I can see why it may seem surprising at first glance that education doesn't correlate positively with our two scales. (Like David, I am not sure whether the negative correlation will hold up.) It seems surprising because we know that most existing highly engaged EAs are highly educated (and likely have high cognitive abilities). But what this lack of positive correlation shows is simply that high education (and probably also high cognitive abilities) is not required to intuitively share the core moral values of EA.

As we point out in the article, there are likely several additional factors that predict whether someone will become a highly engaged EA. And it's possible that education (and likely high cognitive abilities) is such an additional, and psychologically separate, factor.
 

Just to add to what David said: it's difficult to say whether our NYU business sample or our MTurk sample is more representative of our primary target audience. The best way to find out would be to run a large representative survey, e.g. among students of all study subjects (not just business) at a top university.

 

Yes, it was initially quite surprising that so many donors are willing to support the matching system. We found similar results when we tested it with MTurk participants (who were given a small bonus which they could give or keep; see Study 7). One possibility is that it's a kind of intergenerational reciprocity tendency, where people who benefited from the generosity of previous donors want to pay it forward to the next ones.

Thanks!

Perhaps, but we are uncertain. It depends on whether we can find a scalable strategy for reaching donors who are amenable to EA but not yet engaged with it. Such a strategy might come from paid advertising, further earned media coverage (our strategy so far), or the formation of institutional relationships (e.g. with businesses, universities, or wealth managers) that offer guidance or incentives for charitable giving.

Yes, we've recently introduced our donors to GWWC. (Results of that campaign are not in yet.)

Thanks, Linch.

First, you’re right that several EA psychology researchers are studying how people donate to charity. But most of them (including myself) are also studying other EA-related topics, such as the psychology of xrisk and longtermism, moral attitudes towards animals, etc. My hunch is that only a minority of currently ongoing EA psychological research projects have charitable giving as their primary topic of interest.

Second, as David pointed out, donation choices are a useful behavioral outcome measure when studying the public's beliefs, attitudes, and preferences about EA-related issues more generally. In many cases, the goal of the research is not necessarily to understand how people donate to charity specifically, but to understand the fundamental psychological drivers of, and obstacles to, EA-aligned attitudes and behavior more generally (example). Studying these in the context of charitable giving is an obvious and often straightforward first step — in the hope that these insights can be generalized.

For example, the fact that people are willing to split their donation, as described in the post, tells us something more fundamental about people's preference structure (most people value effectiveness, but only as a secondary preference), the potential market size of EA in the general public, and possible routes toward wider adoption of EA ideas. Another example is the study of individual differences: who are the people who immediately find EA ideas appealing, where can we find them, and how should we target them? It's natural to test this, in part, by observing people's donation choices.

My view on prioritization is that psychological research is most useful when it yields such fundamental insights. But applied research, such as marketing or psychometric research, can also be practically valuable, for example for recruitment.

I don't think our findings suggest that people have a preference for populations with higher variance in welfare (i.e. greater differences in how happy people are). All else equal, people probably have a strong preference for a fair welfare distribution (even in the US). But sometimes they may choose the option with more welfare variance because that population has a higher average or total welfare level (or for some other reason).
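(To make the distinction concrete, here is a toy numerical example with purely illustrative welfare values, not data from our studies: a population can have higher variance in welfare and still be chosen because its average and total welfare are higher.)

```python
import numpy as np

# Purely illustrative welfare levels, not study data.
equal_population = np.array([50, 50, 50, 50])    # fair distribution, lower average
unequal_population = np.array([90, 80, 70, 40])  # more variance, higher average

for name, pop in [("equal", equal_population), ("unequal", unequal_population)]:
    print(f"{name}: mean={pop.mean():.1f}, total={pop.sum()}, variance={pop.var():.1f}")

# A participant might pick the unequal population despite disliking inequality,
# because its higher average/total welfare outweighs the unfairness.
```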

I agree with you that it would be very interesting to do a cross-cultural study. I don't have a specific hypothesis about cross-cultural differences, though. Note that there already exists some cross-cultural research on fairness and prosocial behavior.

Thanks, these are great points!

As for your first question about the philosophical implications of this psychological research: in general, the primary goal of our project was a descriptive one, and it would require a separate project (ideally led by philosophers) to figure out what the possible normative implications are. I also believe that we need much more empirical research to understand in greater detail what exactly the psychological mechanisms are that drive people's population ethical views. I see this as a very first exploration.

That said, I agree with much of what Jack says in the other comment. We should be cautious about simply accepting lay people's intuitive reactions to these tricky moral dilemmas, or even basing our policies on them. Most people's reactions are very uninformed (most have never thought about these questions before); their reactions are often inconsistent and framing-dependent, and — as we saw in some of our studies — people themselves tend to revise their opinions after more careful reasoning.

At the end of our paper, we say:

However, this [the fact that people's judgments are inconsistent and biased] does not mean that it is not valuable to examine lay people's population ethical intuitions. Population ethics has important implications for policy making and global priority setting. Philosophers often rely on their own intuitions when discussing population ethics. An understanding of the psychology of these population ethical intuitions can therefore be informative. For example, greater awareness of the specific psychological mechanisms and biases driving these intuitions could elucidate which ones should be endorsed under reflection and which ones not. The apparent inconsistencies between some of these intuitions demonstrate that it may be impossible to formulate a population ethical theory that is both consistent and intuitive (cf. impossibility theorems; Arrhenius, 2000). One possible solution could be a debunking approach: attempting to understand the psychological underpinnings of different philosophical positions, with an eye to identifying those that result from unreliable or biased cognitive processes. This in turn allows the resolution of inconsistency by discounting certain intuitions as untrustworthy (cf. Greene, 2014). Another possible resolution is to accept the fact that we are internally conflicted and, as a consequence, uncertain which moral theory is right (MacAskill, Bykvist, & Ord, 2020). 

As for your second question about the adding-people experiment (Studies 2a-b): you are right that participants may misinterpret our dilemmas and questions. This is a general issue with studying such abstract questions, and we tried our best to make things as clear as possible to people. In most studies, for example, we double-checked whether people understood and accepted our assumptions (and excluded from the analyses participants who failed these checks).

In Studies 2a-b, the question we asked was "In terms of its overall value, how much better or worse would this world (containing this additional person) be compared to before?" (1 Much worse - 7 Much better). Even though this seems pretty clear to me, I think you're right that it's possible that some participants also considered the indirect effects that adding a new person would have on other people. One reason why I believe our finding would largely stay the same, even if we ensured that participants did not take indirect effects into account, is the empty-world condition in Study 2b. (And this relates to your comment.) In Study 2b, we indeed had a condition where the initial world contained zero people (empty world) and another condition where the initial world contained 10 billion people (full world). And even in the empty-world condition, where you'd expect such indirect-effect considerations to be ruled out, we still find the same pattern. (That being said, I believe it's possible that a different question and different framing could yield different results.)
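(As a purely illustrative sketch of the kind of comparison described above, the snippet below computes mean ratings by initial-world condition. The dataframe, column names, and values are hypothetical, not our actual data or analysis code.)

```python
import pandas as pd

# Purely illustrative rows: initial-world condition and the 1-7 rating of how much
# better or worse the world would be after adding the new person.
df = pd.DataFrame({
    "initial_world": ["empty", "empty", "full", "full", "empty", "full"],
    "rating":        [5, 4, 5, 3, 6, 4],  # 1 = much worse, 7 = much better
})

# Mean rating by condition; if the same pattern holds even in the empty-world
# condition, indirect effects on pre-existing people cannot explain it.
print(df.groupby("initial_world")["rating"].agg(["mean", "count"]))
```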

Regarding your comment, let me clarify: in Study 2a, the initial world contained 1 million people, but in Study 2b we tried to replicate the effect with a scenario where the initial world contained either zero people or 10 billion people. I believe this is described correctly in the paper (if not, please let me know). But I noticed that there was an incorrect paragraph in our supplementary materials, which may have led to this confusion and which I've now fixed (thanks for making me aware of it!).
