
The 2022 EA Survey is now live at the following link:  https://rethinkpriorities.qualtrics.com/jfe/form/SV_1NfgYhwzvlNGUom?source=eaforum 

We appreciate it when EAs share the survey with others. If you would like to do so, please use this link (https://rethinkpriorities.qualtrics.com/jfe/form/SV_1NfgYhwzvlNGUom?source=shared) so that we can track where our sample is recruited from.

We originally planned to leave the survey open until December 1st, though we noted we might extend the window, as we did last year. Update: the deadline for the EA Survey has now been extended until December 31st, 2022.

What’s new this year?

  • The EA Survey is substantially shorter. Our testers completed the survey in 10 minutes or less. 
  • We worked with CEA to make it possible for some of your answers to be pre-filled with your previous responses, to save you even more time. At present, this is only possible if you took the 2020 EA Survey and shared your data with CEA. This is because your responses are identified using your EffectiveAltruism.org log-in. In future years, we may be able to email you a custom link which would allow you to pre-fill, or simply not be shown, certain questions you have answered before, whether or not you share your data with CEA; there is an option to opt in to this in this year's survey.

Why take the EA Survey?

The EA Survey provides valuable information about the EA community and how it is changing over time. Every year the survey is used to inform the decisions of a number of different EA orgs. And despite the survey being much shorter this year, we have included requests from a wider variety of decision-makers than ever before.

Prize

This year the Centre for Effective Altruism has, again, generously donated a prize of $1000 USD that will be awarded to a randomly selected respondent to the EA Survey, for them to donate to any of the organizations listed on EA Funds. Please note that to be eligible, you need to provide a valid e-mail address so that we can contact you.


 

Comments

Yay! Glad you're doing this.

Whenish might the results be available? (e.g. by the new year, or considerably after?)

Thanks!

I'd say it's pretty uncertain when we'll start publishing the main series of posts. For example, we might complete a large part of the series before we start releasing individual posts, and we may use a format this year where we put more results on a general dashboard and then include a smaller set of analyses in the main series of posts. But my best guess is early in the new year.

That said, we'll be able to provide results/analyses for specific questions you might want to ask about essentially immediately after the survey closes. 

@David_Moss

Any updates on when you're able to share results?

Thanks for asking. I'm hoping to begin publishing the series this week. We've been working on the FTX analyses first, since this seemed more time-sensitive.

Another nudge here about the rest of the data :) Thanks for the FTX analyses, those were really interesting and I'm glad you prioritised sharing those.

Thanks for following up! The first EAS post is due out on Monday actually. We've been providing lots of custom analyses to different orgs and movement builders on request though, so feel free to reach out if there are any specific results you need.

That's great, thanks! Excited to read.

cool, thank you!

Small suggestion - could you include some text on the front page about who you think the survey is for (e.g. is it everyone who self-identifies with the term effective altruist? anyone who considers themselves part of the EA community? someone who has read a book / listened to a podcast about effective giving / longtermism / farmed or wild animal welfare?). 

I appreciate that the sampling frame here is extremely difficult, and I'm supportive of trying to survey ~everyone of relevance, but the way it's set up now it's not clear to me who you're trying to reach. I can imagine people you might want to reach not filling the survey out because of how the landing page is set up. I'd push for an inclusive framing of who you're trying to include. The current page assumes that the reader knows what the "EA survey" is - which is pretty ingroup / assumes a lot.

Thanks for the suggestion! We can certainly add something about this to the landing page. [And have now done so]

I would also note that text like this is usually already included where the survey is distributed, i.e. when the survey is distributed through the EA Newsletter or CEA social media, it goes out with a message like "If you think of yourself, however loosely, as an “effective altruist,” please consider taking the survey — even if you’re very new to EA! Every response helps us get a clearer picture" before people see the survey. That kind of message didn't seem as necessary for the EA Forum announcement, since this is already a relatively highly engaged audience.

I think there should be more questions under the 'extra credit' section. I was willing to spend more time on this, and I think there are other views I would be interested in understanding from the average EA.

A low-effort attempt at listing a few things which come to mind:

  • moral views
  • biggest current uncertainties with EA
  • community building preferences
  • identification with EA label
  • best and worst interactions with EA

Thanks! This is useful feedback.

We'd like to include more questions in the extra credit section, and I agree it would be useful to ask more about the topics you suggest. 

Unfortunately, we don't find that adding more questions to the extra credit section is completely 'free'. Even though it's explicitly optional, we still find that people sometimes complain about the survey's length with the optional extra credit section included. And there's still a tradeoff in terms of how many people complete all or part of the extra credit section. We'll continue to keep track of how many people complete the survey (and different sections of it) over time, to try to optimise the number of extra questions we can include. For example, last year about 50% of respondents started the extra credit section and about 25% finished it.

Notably, we do have an opt-in extra survey, sent out some time after the main EA Survey. Previously we've used this to include questions requested by EA (usually academic) researchers, whose questions we couldn't prioritise including in the main survey (even in extra credit). Because this is completely opt-in and separate from the EA Survey, we're more liberal about including more questions, though there are still length constraints. Last year about 60% (~900 people) opted in to receive this, though a smaller number actually completed the survey when it was sent out.

We've previously included questions on some of the topics which you mention, though of course not all of them are exact matches:

  • Moral views: We previously asked about normative moral philosophy, metaethics, and population ethics
  • Identification with EA label: up until 2018, we had distinct questions asking whether people could be described as "an effective altruist" and whether they "subscribe to the basic ideas behind effective altruism". Now we just have the self-report engagement scale. I agree that more about self-identification with the EA label could be interesting.
  • Best and worst interactions with EA: we've definitely asked about negative interactions or experiences in a number of different questions over the years. We've not asked about best interactions, but we have asked people to name which individuals (if any) have been most helpful to them on their EA journey.
  • Community building preferences: we've asked a few different open-ended questions about ways in which people would like to see the community improve or suggestions for how it could be improved. I agree there's more that would be interesting to do about this.

Thanks for the detailed response. It's great hearing about the care and consideration when forming these surveys!

Given "last year about 50% of respondents started the extra credit section and about 25% finished it", this still feels like free info even if people don't finish. But I guess there are also reputation risks in becoming The Survey That None Can Finish.

I note that previous surveys included some of the information I suggested would be useful, and I think that's why I'd be so excited to see it carried over across the years - especially with the rapid growth in the number of EAs.

I don't feel like any substantial change should be made off my views expressed here, but I did want to iron out a few points to make my feedback clearer. Your point about follow-up surveys probably catches most of my worries about sufficient information being collected. Thanks again David and team :)

How far and wide should people (and especially community builders) spread this and encourage others to fill it in? For example, I could ask people from my local group to fill in the survey, but I don't want to skew the results.

Thanks for asking! We would definitely encourage community builders to share it with their groups. Indeed, in previous years, CEA has contacted group organizers directly about this. We would also encourage EAs to share it with other EAs (e.g. on their Facebook page) using the sharing link. I would not be concerned about you 'skewing the results' by sharing and encouraging people to take the survey in this way, in general, so long as you don't go to unusual lengths to encourage people (e.g. multiple reminders, offering additional incentives to complete it, etc.).

This is not a criticism (think that it's sick you do this survey, and grateful for your work), but I'm curious whether Rethink (or anyone else) has had a go at adjusting for a non-random sample of the total EA community (however that is defined) being made aware of the survey and choosing to participate.

Probably out of scope for this project - and maybe not even that useful - but I wonder whether it could be useful to survey a sample from an actually random frame to just get some approximate idea of the size and demographics of the EA community. That might help inform EA orgs in working out how much to update based on the survey responses.

credence: not a stats guy, maybe this has been done already, maybe it doesn't matter very much, open to being told why this doesn't matter actually.

Thanks! We think about this a lot. We have previously discussed this and conducted some sensitivity testing in this dynamic document.

I wonder whether it could be useful to survey a sample from an actually random frame to just get some approximate idea of the size and demographics of the EA community.

The difficulty here is that it doesn't seem to be possible to actually randomly sample from the EA population. At best, we could randomly sample from some narrower frame (e.g. people on main newsletter mailing list, EA Forum users), but these groups are not likely to be representative of the broader community. In the earliest surveys, we actually did also report results from a random sample drawn from the main EA Facebook group. However, these days the population of the EA Facebook group seems quite clearly not representative of the broader community, so the value of replicating this seems lower.

The more general challenge is that no-one knows what a representative sample of the EA community should look like (i.e. what is the true composition of the EA population). This is in contrast to general population samples where we can weight results relative to the composition found in the US census. I think the EA Survey itself represents the closest we have to such a source of information.
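For illustration, here is a minimal sketch (in Python, with entirely hypothetical numbers, not our actual pipeline) of the kind of post-stratification weighting that a reference composition like the census makes possible: the weight is just the reference population share divided by the sample share.

```python
import pandas as pd

# Hypothetical sample: 70% male, 30% female respondents.
sample = pd.DataFrame({"gender": ["male"] * 70 + ["female"] * 30})

# Reference composition, e.g. from a census (hypothetical shares).
census_share = {"male": 0.49, "female": 0.51}

# Post-stratification weight = population share / sample share.
sample_share = sample["gender"].value_counts(normalize=True)
sample["weight"] = sample["gender"].map(lambda g: census_share[g] / sample_share[g])

# Weighted estimates would then use these weights. For the EA population,
# no such reference composition exists, so the weights cannot be computed.
print(sample.groupby("gender")["weight"].first())
```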

That said, I don't think we are simply completely in the dark when it comes to assessing representativeness. We can test some particular concerns about potential sources of unrepresentativeness in the sample (and have done this since the first EA Survey). For example, if one is concerned that the survey samples a disproportionate number of respondents from particular sources (e.g. LessWrong), then we can assess how the samples drawn from those sources differ and how the results for the respondents drawn from those sources differ. Last year, for example, we examined how the reported importance of 80,000 Hours differed if we excluded all respondents referred to the survey from 80,000 Hours, and still found it to be very high. We can do more complex/sophisticated robustness/sensitivity checks on request. I'd encourage people to reach out if they have an interest in particular results, to see what we can do on a case by case basis.
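For concreteness, a minimal sketch of that kind of exclusion-based robustness check (hypothetical data and column names, not our actual analysis code):

```python
import pandas as pd

# Hypothetical respondent-level data: where each respondent was recruited
# from, and their 1-5 rating of how important 80,000 Hours was to them.
df = pd.DataFrame({
    "referral_source": ["80000_hours", "ea_forum", "newsletter",
                        "80000_hours", "local_group", "ea_forum"],
    "importance_80k": [5, 4, 4, 5, 3, 4],
})

# Compare the full-sample estimate with the estimate after dropping
# everyone recruited via the source under suspicion.
full = df["importance_80k"].mean()
robust = df.loc[df["referral_source"] != "80000_hours", "importance_80k"].mean()
print(f"Full sample: {full:.2f}; excluding 80K referrals: {robust:.2f}")
```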

Edit: 

sorry, I see now that you've discussed the point in my comment below (which I've now put in italics) in the linked document. I'm grateful for, but not surprised at, the care and thought that's gone into this.

If it's not too much of your time, I'm just curious about one more thing. Is the paragraph below saying that surveying the general population would not provide useful information, or is it saying something like 'this would help, but would not totally address the issue'? Like, is there any information value in doing this - or would it basically be pointless/pseudoscientific?

Surveying the general (non-EA) population as part of larger representative surveys to get a sense of the overall composition of EAs (e.g., the gender ratio). However, differential non-response to these larger surveys would again throw this in doubt. Standard corrections may not be easy, and relative non-response among EAs (e.g., male versus female EAs) may differ from the relative non-response to such surveys in other populations.

***

Original comment:


Thanks for the in depth response, David.

The difficulty here is that it doesn't seem to be possible to actually randomly sample from the EA population

Sorry, I explained poorly what I meant. What I meant to ask was whether you could randomly sample from a non-EA frame, identify EAs based on their responses (presumably a self identification question), and then use that to get some sense of the attributes of EAs.

One problem might be that the prevalence of EAs in that non-EA population might be so minuscule that you'd need to survey an impractical number of people to know much about EAs.

Another response is that it just wouldn't be that useful to know, although the cost involved in hiring polling companies in a few places to do this may not be that much when weighed against the time cost of lots of EAs doing the survey at 10 min/response.

I was a pretty motivated EA (donated, sometimes read EA literature) who did consider myself an EA but was entirely disengaged from the community from 2013-2017, and then barely engaged from 2017-2020. Additionally, when I speak with other lawyers it's not uncommon to hear that someone is either interested in EA or has begun donating to an EA charity, but that they haven't gotten involved with the community because they don't see how that would help them or anyone else do more good.

I don't know how useful you think it would be to know more about the makeup and size of that population of unengaged EAs (or EA-adjacent folk, or whatever the label). Maybe it just wouldn't be very decision-relevant for the orgs who have expressed interest in using the data. My initial sense is that it would be useful, but I don't really know.

Thanks for the comments!

Is the paragraph below saying that surveying the general population would not provide useful information, or is it saying something like 'this would help, but would not totally address the issue'?

It's just describing limitations. In principle, you could definitely update based on representative samples of the general population, but there would still be challenges.

Notably, we have already run a large representative survey (within the US), looking at how many people have heard of EA (for unrelated reasons). It illustrates one of the simple practical limitations of using this approach to estimate the composition of the EA community, rather than just to estimate how many people in the public have heard of EA. 

Even with a sample of n=6000, we still only found around 150 people who plausibly even knew what effective altruism was (and we think this might still have been an over-estimate). Of those, I'd say no more than 1-3 seemed like they might have any real engagement with EA at all. (Incidentally, this is roughly the ratio that seems plausible to me for how many people who hear of EA actually then engage with EA at all, i.e. 150-50:1 or less.) Note that we weren't trying to see whether people were members of the EA community in this survey, so the above estimate is just based on those who happened to mention enough specifics - like knowing about 80,000 Hours - that it seemed like they might have been at all engaged with EA. So, given that, we'd need truly enormous survey samples to sample a decent number of 'EAs' via this method, and the results would still be limited by the difficulties mentioned above.
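As a rough back-of-envelope using the figures above (the target number here is an arbitrary illustrative choice):

```python
# ~150 of 6,000 respondents plausibly knew what EA was; perhaps
# 1-3 showed any real engagement. Take the midpoint, 2.
n_surveyed = 6_000
p_engaged = 2 / n_surveyed

target_eas = 500  # hypothetical number of engaged EAs we'd want to sample
required_n = target_eas / p_engaged
print(f"~{required_n:,.0f} general-population respondents needed")  # ~1,500,000
```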

Thanks for taking the time to explain, David!

Great survey design, and very short! I liked that you kept the Extra Credit section short, because I was able to fill it out quickly and easily.

Thanks so much for doing this! I'm really excited to see the results of the survey and expect it'll be really useful :)

Some nits:

  • I noticed that some questions which asked about the impact of programs had a "None of the above" response option, whereas others didn't. (My understanding is that it's best practice to include a "None" option for all of these? E.g. I noticed its absence for the "Which of these had the largest influence on your impact?" question.)
  • I found the options confusingly ordered in the grid multi-select (e.g. for the year when you joined EA, it took me a sec to find the right option, and it feels more natural in a grid to read left to right than top to bottom) — did you consider just leaving this as a list?
  • I don't think it's common phrasing to talk about "increasing / decreasing" mental health; maybe I would have phrased it as "had a positive or negative impact on" mental health
  • On "Do you have any suggestions as to how EA could be improved or developed?", I don't have a strong sense of what's being asked here, or what "EA" refers to in this context. Is this about the community at large, specific people within the community, is this still including "adjacent ideas", etc.?

Thanks for taking the time to comment!

All of the questions whose wording you had substantive comments about were external requests, which we included verbatim.

Re. the order of the dates: when testing, we found some people thought this was more intuitive left to right and some top to bottom (for context, it's not specifically designed as 'grid', it just happens that the columns are similar lengths to the rows). It could be put in a single column, though not without requiring people to scroll to see the full questions, or changing the style so that it doesn't match other questions. Exactly how you see the questions will vary depending on whether you're viewing on PC, phone or tablet though.

WAY too many of the questions only allow checking a single box, or a limited number of boxes. I'm not sure why you've done this? From my perspective it almost never seems like the right thing, and it's going to significantly reduce the accuracy of the measurements you get, at least from me.

An example would be the question asking something like "what is the main type of impact you expect to have". I expect to do things that are entrepreneurial, which involve or largely consist of community building, communication and research. I don't know which of those four impact types is going to be the largest (it's not even possible to assess that, and I'm not sure it's a meaningful question, considering that the impacts are often dependent on more than one of those factors at the same time: we can't blame any one factor), but even if I did know how to assess that, the second-place category might have a similar amount of impact to the first, meaning that by only asking for the peak, you're losing most of the distribution.

Other examples, which are especially galling, are the questions about religious or political identity. The notion that people can only adhere to one religion is actually an invention of monotheist Abrahamic traditions, and arguably a highly spiritually corrosive assumption. I'm not positioned to argue that here, but the survey shouldn't be imposing monotheistic assumptions.
The idea that most EAs would have a simple political identity or political theory is outright strange to me. Have you never actually seen EAs discussing politics? Do you think people should have discrete political identities? I think having a discrete, easily classifiable political identity is pretty socially corrosive as well, and shouldn't be imposed by the survey! (Although maybe an 'other' or 'misc' category is enough here. People with mixed political identities tend not to be big fans of political identity in general.)

WAY too many of the questions only allow checking a single box, or a limited number of boxes. I'm not sure why you've done this? From my perspective it almost never seems like the right thing, and it's going to significantly reduce the accuracy of the measurements you get, at least from me.

Thanks for your comment. A lot of the questions are verbatim requests from other orgs, so I can't speak to exactly what the reasons for different designs are. Another commenter is also correct to mention the rationale of keeping the questions the same across years (some of these date back to 2014), even if the phrasing isn't what we would use now. There are also some other practical considerations, like wanting to compare results to surveys that other orgs have already used themselves.

That said, I'm happy to defend the claim that allowing respondents to select only a single option is often better than allowing people to select any number of boxes. People (i.e. research users) are often primarily interested in the _most_ important or _primary_ factors for respondents, for a given question, rather than in all factors. With a 'select all' format, one loses the information about which are the most important. Of course, ideally, one could use a format which captures information about the relative importance of each selected factor, as well as which factors are selected. For example, in previous surveys we've asked respondents to rate the degree of importance of each factor, as well as which factors they did not have a significant interaction with. But the costs here are very high, as answering one of these questions takes longer and is more cognitively demanding than answering multiple simpler questions. So, given significant practical constraints (to keep the survey short, while including as many requests as possible), we often have to use simpler, quicker question formats.

Regarding politics specifically, I would note that asking about politics on a single scale is exceptionally common (I'd also add that a single-select format for religion is very standard, e.g. in the CES). I don't see this as implying a belief that individuals believe in a single "simple political identity or political theory." The one wrinkle in our measure is that 'libertarian' is also included as a distinct category (which dates back to requests in 2014-2015 and, as mentioned above, the considerations in favour of keeping questions consistent across years are quite strong). Ideally we could definitely split this out so we have (at least) one scale, plus a distinct question which captures libertarian alignment or more fine-grained positions. But there are innumerable other questions which we'd prioritise over getting more nuanced political alignment data.

With a 'select all' format, one loses the information about which are the most important

Have you found that people answer that way? I'll only tend to answer with more than one option if they're all about equally important.

You might expect that it's uncommon for multiple factors to be equally important. I think one of the reasons it is common, in the messy reality that we have (which is not the reality that most statisticians want), is that multiple factors are often crucial dependencies.
Example: a person who spends a lot of their political energy advocating for Quadratic Funding (a democratic way of deciding how public funding is allocated) cannot be said to be more statist than they are libertarian, or vice versa, because the concept of QF just wouldn't exist, and cannot be advocated, without both schools of thought. There may be ways of quantifying the role of each school in its invention, but they're arbitrary (you probably don't want to end up just measuring which arbitrary quantifications of qualitative dependencies respondents might have in mind today). The concept rests on principles from both schools; to ask which is more important to them is like asking whether having skin is more important to an animal than having blood.

I think one consideration is that they want to make the surveys comparable year to year, and if people can select many categories, that would make it difficult.

For adding multiple options, I think there's another kind of challenge: if someone could select multiple political identities or religions, that would make the results difficult to interpret. It seems sort of "mainstream", for better or worse, that there is one category for some of the things you mentioned.

Zooming out, it seems that instead of seeing choices like single vs. multiple select as a right-or-wrong choice or an attempt to impose a viewpoint, these seem to be design decisions, which are inherently imperfect in some sense and part of some bigger vision or something.

I think one consideration is that they want to make the surveys comparable year to year

Makes sense. But I guess if it's only been one year, there wouldn't have been much of a cost to changing it this year, or the cost would have been smaller than the cost of not having it right in future years.

if someone could select different political identities or religions, that would make the result difficult to interpret

Could you explain why? I don't see why it should, really.

Could you explain why? I don't see why it should, really.

Well, in one sense that is shallow: what would an agnostic person + (some other religion) mean?

Maybe deeper (?): it seems like some religions like Buddhism, which accepts other practices, would be understood to accept other practices. So it's not clear if a Buddhist who selected multiple options had different beliefs, or was just being very comprehensive and communicative like a good EA.

Well, in one sense that is shallow: what would an agnostic person + (some other religion) mean?

Uh, that specifically? Engaging in practices and being open to the existence of the divine, but ultimately not being convinced. This is not actually a strange or uncommon position. (What if there are a lot of statisticians who are trying to make their work easier by asking questions that make the world seem simpler than it is?)

it seems like some religions like Buddhism, which accepts other practices, would be understood to accept other practices [but not believe in them or practice them?]

That just sounds like a totally bizarre way to answer the question as I understood it (and possibly as it was stated, I don't remember the details). I wouldn't expect a buddhist with no other affiliations to answer that way. I don't believe the ambiguity is there.

"What actions would you like to see from EA organizations or EA leadership in the next few months?" 

  • Pausing new grant investigations 
  • Pausing public outreach and other attempts to grow the movement 
  • Something approximating a formal truth & reconciliation process 
  • More inner work (therapy, meditation, movement practices, self-directed reflection, time in nature, pursuit of Aristotelian leisure especially by working with one's hands) 

I'm curious to read some of the reasoning of those who disagreed with this, as I'm currently high-conviction on these recommendations but feel open to updating substantially (strong beliefs weakly held). 

Will all the results of the survey be shared publicly on EA Forum? I couldn't find mention about this in the couple announcements I've seen for this survey.

It looks like at least some of the 2020 survey results were shared publicly. [1, 2, 3] But I can't find 2021 survey results. (Maybe there was no 2021 EA Survey?)

I can confirm that we'll be releasing the results publicly and that there wasn't a 2021 EA Survey. And you can view all the EAS 2020 posts in this sequence.

Thanks for sharing!

Regarding the question:

Which of the following have, within the last 12 months, had the largest influence on your personal ability to have a positive impact? (Choose up to three options.)

Would video calls count as "Personal contact with EAs", or is this option reserved for non-virtual contact?

Thanks for asking! This is a question requested by another org (in 2019), so I can't give you the definitive authorial intent. But we would interpret this as including virtual personal contact too (not just in-person contact).

Thanks! I also interpreted it that way.

Hi, I hope you do leave this up a week longer at least!

The FTX fiasco meant a lot of people were overwhelmed and are just now getting spoons back for tasks like this. I will be repromoting this survey to the EA Austin community tonight as I think hardly anyone here will have filled it out. Hoping the link is not dead when people click it 😅🙏

Many thanks for checking in and sharing the survey! I can confirm that we're now leaving the survey open until the end of the year, like last year. Please also view this post about the extra questions about FTX that have now been added, and the separate survey with questions about FTX that people can take if they have already completed the main survey.

Awesome, thanks!
