Background
In September 2021, we announced the EA Market Testing team (see our page for more details).[1] We requested feedback, both in the post and in a dedicated Feedback Form (LINK), stating:
...our mission is to identify the most effective, scalable strategies for marketing EA and EA-aligned activities using rigorous testing approaches (as well as surveys, profiling and meta-analysis). Your ideas about ‘what to test’ and ‘how to test it’, as well as feedback on our current plans, will be immensely valuable. ... We want your opinions, impressions, and experience. [2]
We received 17 responses.[3] We read through all of the suggestions, and they have influenced our work. We have also been in touch with some of the people who made them.
But I think it's also worth summarizing and reporting on this.[4] Below, I use ChatGPT (and a human assistant) to aid this. Not (only) to be lazy, but because it lends a certain neutrality to the summarization: if the AI robot is doing the summarizing, it's harder for me to impose my own bias.[5][6]
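For concreteness, here is a minimal sketch of the kind of summarization call this amounts to. The model name, parameters, and placeholder responses below are illustrative assumptions only; in practice I ran the prompts interactively in ChatGPT and (per the footnotes) did much of the work by hand.

```python
# Rough sketch, not the exact workflow used: paste the anonymized free-text
# responses into one prompt and ask the model for a neutral, theme-grouped summary.
# Assumes the legacy (circa early-2023) OpenAI completions API and an API key in
# the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

responses = [
    "Response 1 text...",  # placeholders; the real survey answers would go here
    "Response 2 text...",
]

prompt = (
    "Give a detailed summary of the following responses, "
    "grouping them into common themes:\n\n" + "\n---\n".join(responses)
)

completion = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=500,
    temperature=0.2,  # low temperature for a more conservative summary
)

print(completion["choices"][0]["text"].strip())
```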
Results
Respondents' backgrounds
Most respondents had been involved in EA for a long time (see below). Furthermore, 10/17 work or had worked in marketing, at an EA organization, or (in 2 cases) in non-EA fundraising.
What questions ...
To start off, we asked a deliberately open-ended question. We wanted to get a sense of people's un-prompted interests and understanding. We asked:
What questions do you think we should answer with our research? Do you have any 'burning questions' or ideas? What do you want to learn?
15/17 people responded to this question.[7]
Human summary
I asked an assistant[8] to summarize the responses to this question;[9] I report that summary below.
The responses ... can be broadly categorized into six overarching themes.
- Feasibility of marketing EA principles or EA philosophical ideas; EA branding issues. (Examples... [10])
- Optimizing the messaging to the existing EA community[11]
- ... Using mainstream social media marketing tools to market EA to general audiences
- Defining the ideal or desirable target audience for growing the existing community. (Examples: [12]; other examples: [13])
- Costs [and scalability] of attracting new EAs and EA awareness. (Examples... [14])
- Tractability issues. (Example: [15])
Adapted GPT3 summary
We asked everyone's favorite chatty robot to "Give a detailed summary of the following responses..." This yielded the following (broken into paragraphs, with some comments, cuts and emphasis added). It picked up several themes discussed above.
There is a desire to understand the extent to which the philosophical principles behind EA can be marketed and how to induce a positive inclination towards utilitarian and optimization-based giving. ... ... whether there is a difference in the likelihood of people taking significant action (e.g. changing careers) based on their age
Indeed several respondents mentioned the profiling and segmentation issue, i.e., who would be more inclined to go EA, what the psychometric profiles are, and how to tailor marketing differently for different groups.
(Other themes also echoing the human summary: [16])
Some distinct themes were picked up by the chatbot:
Other questions include: ... the psychological traits of effective altruists and how EA messaging can be tailored to these traits...
... how to make EA messaging more engaging and relevant to current affairs trends, and how to effectively advertise EA in a way that is consistent with the strategies of different EA-related organizations and prompts continuous redefinition of [doing the most good based on reasoning].
... understanding the motivations and needs of people who donate to charities and how to engage them in EA, as well as the potential for using celebrities as spokespeople for EA.
Finally, there are questions about how to effectively communicate the value of long-term global catastrophic risk mitigation and how to balance the need for transparency and honesty in EA messaging with the potential for causing alarm or despair.
I also want to highlight a particularly interesting response to the project as a whole, asking
... whether any messaging leads to increased engagement with EA.
This is an ongoing concern for us.[17] We are pivoting somewhat, to focus more on testing ambitious interventions, occurring at pivotal decision points in the EA/effective giving process,[18] where substantial outcomes are traceable and attributable. We are also emphasizing measuring the impact of 'levers' where EA organizations are particularly uncertain about which option is better, i.e., where we anticipate a high value of information.
What messaging...
We asked:
Do you have any specific ideas for approaches to messaging or targeting people who might be receptive to EA?
Some example responses, excerpted:[19]
Messaging content, EA benefits
- "Appealing to a person’s wish to perform and be the source of good and impressive outcomes."
- "Highlighting hard questions to bring in intellectually curious people. Flatter people a bit more?"
- "Non-elitist thought leadership with watered-down or populist-inflected EA messaging"
- "For college students, market really cool way of hanging out with persons upon the pretext of doing some projects."
- Rational (rather than emotional) arguments, positive framings, global health as the best 'on-ramp' cause
Targeting particular and nontraditional audiences
- "Messaging to progressives should likely center privilege rather than impact."
- "I would be interested if there's research into the assumptions we have about people "who might be receptive to EA'. ... Maybe it's not so difficult to get 'grandmother-who's-never-left-her-home-town' or 'mate-in-the-pub' to become interested and active, if things are presented in the right way for them and they feel like they can engage."
- "For high school students, share Peter Singer’s work (Practical Ethics or perhaps The Expanding Moral Circle) for analysis."[20]
Other avenues and outreach methods
- Offer scholarships, and market them
- "Pitch people on a different or more general concept whose acceptance precedes EA acceptance. Perhaps pitching rationalism, altruism, consequentialism, utilitarianism is an easier or more effective sell, and naturally leads to [an] increase in EA adherents indirectly."
- Use Quora
"What 'EA messaging' questions do you think we should answer with our research?"
Some key themes and interesting specifics:
- Sources and demographics of "skepticism towards EA principles in the population"
- "Which EA stances and people associated with EA are most and least popular across different segments of the population?"
- "Art and emotion, as well as social media advertising, may be effective in marketing EA."
- Particular concepts from mainstream marketing and fundraising, such as 'social proof', 'everyday heroes', a 'storytelling framework', and 'comparison to other expenses (or savings)'. It would be helpful to know what factors influence whether or not people engage with EA messaging and how to keep them engaged.
Whom to target
Whom should we target and how do we identify them? Which groups do you suspect might be particularly amenable to EA-aligned ideas/actions, or worth exploring further? Which individual and group traits and characteristics would be particularly interesting to learn more about? What approaches to targeting might we take?
Summarizing the responses (direct cut and paste from GPT):[21]
Some suggested groups to target for effective altruism include people younger than 30, tech startup workers and English speakers in mainland Europe, professors, NGO staff seeking funding, funders of non-effective projects, older people interested in philosophy, bureaucrats, academics, relatively less affluent people in developed countries, data-oriented young people in California, conservative people in the US, decision-makers in DC, and people interested in animal welfare. Additionally, some suggested targeting people with an interest in systemic change, people who can influence large entities, people who are educated and present a good image, people with innovative insights and connections to decision-makers, and people who are interested in efficiency, coordination, innovation, and effectiveness. Others suggested targeting people through book clubs, online events, and niche social media platforms...
Other target audiences mentioned were 'the cryptocurrency movement' and 'people interested in adjacent topics'.
My summary: There was a lack of consensus. There seems to be (the usual?) division between 1. 'target young/tech/elite groups we see as high-value for direct work on existential risks' and 2. 'reach out to new audiences in new ways'.
Risks, pitfalls, precautions
The final specific question[22] asked:
What down-side risks (if any) are you concerned about with our work? Do you have any advice as to what we should be very careful about, what to avoid, etc?
Here, many of the (12) responses emphasized methodological issues and pitfalls threatening the credibility and relevance of our research at EAMT. Issues raised:
- Focusing on the more measurable outcomes (e.g., small donations) at the expense of more important outcomes (e.g., major career-plan changes and major changes in mindsets)
- or focusing on non-actionable questions[23]
- Limited generalizability across contexts
- Related: the risk that limited pieces of evidence will be taken as 'the ultimate truth'
- Difficulty of testing within a 'very large space of possibilities'
- Attribution issues when there are multiple channels
- Unreliability of existing work in 'data-driven marketing'
Several people mentioned reputation hazards from doing the marketing, including
- Unilateralist's curse, bad actors, people 'hijacking' the movement for specific causes, new adherents with 'value drift'; leading to locking in a negative public image of EA
- Treating EA as a corporate product, leading people to misinterpret it or lose respect for it
Only one person mentioned 'direct' risks (e.g., from encouraging people who might become potential terrorists to work in biosecurity research).
To keep me slightly more honest, I put the ChatGPT summary of these responses below, with details in footnotes.[24]
Here are some potential themes that could be extracted from the responses:
- Manipulative marketing and 'hijacking' of the movement[25]
- Attracting risky attention, and losing the support of major donors or influencers[26]
- Direct risks around weapons, biosecurity, and related work[27]
Note to people who participated/further feedback
- If you want to be acknowledged here, please leave a comment.[28]
- We are still interested in feedback and suggestions. The feedback form is still live. Although our priorities have changed somewhat (as suggested above and here), the questions are largely similar. If you wish to fill out the form, please feel free to answer only one, or a subset of the questions; there is no need to repeat yourself across boxes.
(Aside on participation and the future of EAMT[29])
As of 4 Jan 2023, 1090 unique devices viewed the post; 110 for longer than 5 minutes. ↩︎
Continuing ... "We have a few questions with prompts for open-ended answers. We do not expect you to answer all of the questions. But please do enter any relevant thoughts, opinions, and ideas. You can do so anonymously or leave your information if you would like us to contact you and/or acknowledge your ideas." ↩︎
In addition to comments to the post itself. ↩︎
I originally wrote 'I feel obligated', but that seemed like counter-optimizing, a sunk-cost fallacy, or emotionally driven. ↩︎
Update: I had a hard time automating this and ended up doing a lot by hand, so biases may remain. ↩︎
It also might help to anonymize responses and make the writing style consistent. While all but one of the respondents left their identifying information, it is not clear if they wanted their responses publicly attributed. ↩︎
Technically 14/16, as one person responded twice. ↩︎
An EA-aware but not aligned friend ↩︎
Again, an external classification, to avoid my own potential biases. ↩︎
What are the best books to attract new EA’s? Questions to do with brand stereotyping such as 'EA is dull' and 'EA is elitist' and 'how do we deal with that in our marketing'? ↩︎
(as well reaching out to ‘natural EA’s’ that aren’t part of the existing community yet) ↩︎
"Age - demographics - is money or skill more urgent?" "What 'segmentation model' to consider?" ↩︎
These latter examples assume a target demographic, but I think they raise the issue of what this demo is. "How to market EA at start-up workplaces?" "Brand awareness at universities" ↩︎
What does a 1% increase of EA awareness at a top 10 university cost? What are the costs of attracting 1 new EA; what is the conversion rate of specific marketing tools? ↩︎
"How many people take significant action after 3 months of a single EA exposure?" ↩︎
"the 'scaling laws' of outreach (i.e. the cost in terms of time and money to increase awareness of EA among a certain percentage of a specific university), the level of brand awareness of EA among undergraduate students at top universities ... how to effectively market EA in startup workplaces. " ↩︎
While previous and ongoing trials do show different response rates to different messages, my impression is that these differences are subtler than we might have expected. Furthermore, it has been difficult to track the longer-term and more meaningful differences between people approached with different messaging. Where we have done so, the (somewhat preliminary) results suggest a lack of a consistent difference. ↩︎
E.g., at the point of signing up for a giving pledge, or perhaps enrolling in a fellowship ↩︎
With some help from GPT ↩︎
Continuing... "For [anyone other than?] high-school students write some questions that allow the critical thinking about the frameworks that are used in texts and other media and institutions." ↩︎
I looked this over and it seems accurate. Note that at this point, I had to start a new ChatGPT session, as it was mistakenly using too much of the previous context in its answers. Also note there were many very detailed responses I would like to engage with further. ↩︎
There was also an open-ended 'further comments' box ↩︎
Note: our framework and approach does heavily emphasize the idea of 'how would the results inform relevant choices'. ↩︎
Note: these footnotes include some apparent misinterpretation or misclassification by the AI. ↩︎
"This theme includes concerns about using psychological tricks to induce compliant behavior, over-emphasizing marketing efforts on lower impact topics, and the risk of the movement being hijacked for foreign causes or corporate or political purposes." ↩︎
"This theme includes concerns about attracting the attention of terrorists or risky groups, and the potential loss of funding or support from major donors, influencers, or important persons due to manipulation, evidence-based solutions not being the best, or the risk of compromising personal projects." ↩︎
This theme includes concerns about introducing any new weapons and the advice to be careful about recommending non-proliferation work only to those who are a good fit for it. It also includes suggestions to focus on issues such as lethal autonomous weapons and nuclear safety, and to consider supporting medical laboratory research safety and pandemic prevention and preparedness efforts. ↩︎
Does it feel weird to say 'I want to be acknowledged'? It shouldn't, but as cover against accusations of seeking glory, it's probably helpful for others to know who you are. ↩︎
If you would like to work on a specific project with the EAMT, or to take on a particular role or responsibilities (that aligns with your skills), please reach out to me via DM. I strongly believe that the EA and effective giving community can and should continue to coordinate, research, and communicate our results in an organized and efficient way. While our initial grant is ending, we are applying for further funding, and considering other approaches to funding and coordinating efforts between organizations and individuals in this space. I hope to post more about this soon. ↩︎
Wonderful write-up, David. Thank you for this!
Could you expand on what you think is meant by: Unreliability of existing work in 'data-driven marketing'?
One specific comment, others that echo this, and ~my own experience:
Someone with experience in email marketing found "most of the insights from data-driven analysis to be unreliable." ... many reports on 'best time of day to send' contradict one another.
- A large multi-dimensional space of choices to explore, many combinations to test, hard to know how to generalize.
- Tracking and attribution issues
- Ad-optimization algorithms (e.g., Facebook's) are not transparent and can be confusing to draw inferences from

I think it's partly a reaction to for-profit marketers overselling themselves, and partly a statement of 'this being a hard task in general'.