I’m interested in having a better sense of what new kinds of projects should be set up within the EA community. I think I tend to bias towards scepticism, and so find it easier to get a sense of what worries me about projects than which projects I’m excited about. I thought I’d have a go at writing out a few ideas which seem promising to me. I’d love to hear people’s views on them, and also to read other people’s lists. To provide a nudge towards others producing such lists, I’ve also shared some of the prompts I used to come up with the thoughts below.
I haven’t put a lot of time into this list, so I’m not suggesting any, let alone all, are great ideas - they’re just ones I’d be interested to hear more discussion around. I’m also biased by the corners of EA and the world I’ve spent most time in, for example academia.
Prompts for ideas
Aside from ‘what could we do with more of in EA?’, here are some of the specific questions I considered:
How do we win?
Along with: How are we currently falling short on that?
This is a different way of asking what our theory of change as a movement is, and what part of that theory of change currently seems weakest.
For example: I think one way we could make the world far better in decades’ time is by making it the case that all major decision makers (politicians, business leaders etc) use ‘will this most improve wellbeing over the long run?’ as their main decision criterion. Something which would make that most likely to happen is having EA ideas discussed in courses in all top universities. That led me to wonder whether we’re currently neglecting supporting and encouraging lecturers to do that.
What have I wanted from EA (but not gotten)?
For example: The UK government discussed the possibility of folding the Department for International Development into the Foreign and Commonwealth Office, and subsequently did so. DfID, in addition to having an extremely important mission, was achieving that mission pretty well: It had a reputation for being unusually evidence-based amongst development agencies. I had the general sense that the merger would be bad, and would redirect money from trying to help those in the poorest countries to pursuing British interests abroad. But I didn’t have good evidence about whether it would be good or bad overall, or an idea of what I should do about it if it was bad (write to my local MP? Sign a particular petition?). It’s possible I simply missed the work that was done on this (there certainly is some EA work adjacent to this).
What I’d have liked was:
- A succinct summary of what seemed good and bad about the change to give me an idea of whether I agreed with it.
- A really clear action plan if I wanted to help in some way. That might include, for example: sample letters to send to your MP, some considerations on what makes letters to your MP more/less likely to succeed (are emails better than physical letters, or vice versa?), a link to where you can find out who your local MP is and what the best way to contact them is.
What problems have others experienced in EA?
For example: People often appreciate being surrounded by like-minded people. That’s one benefit people often seek from working at an organisation which explicitly identifies as EA. Another possible benefit is a clearer sense that you’re probably heading in the right direction. That comes from others with the same goals as you being able to give you frequent feedback on your direction. But almost all of the impactful positions in the world are at organisations which don’t identify as EA. So it’s important for us to find ways to make sure that wherever they work, people can still have a sense of being often around people with similar values and who help them figure out their path.
Don’t make the perfect the enemy of the good
I find it hard to think about what projects EAs might work on, because of the pressure of needing to work on the thing that will help people most, rather than simply something which has a good shot at helping people some. That pressure becomes more pronounced when I think about how that time could be spent earning money to buy bed nets or deworming medicine. But I think that pressure is ultimately counterproductive, because I think we’ll only be able to do the best we can if we consider a broad array of options and think about them carefully.
Possible gaps
In academia
Supporting teaching of effective altruism at universities:
Teaching courses at universities varies widely, and sometimes there’s little flexibility - for example where the person teaching doesn’t set the exam. Even when I’ve been teaching a class of that type, the resources used by different tutors varied, and I was grateful for people having made different reading lists because they varied in difficulty and emphasis. But often there’s great latitude - either to teach a course entirely of your own design following your interests, or to teach a class requested by students, where the level of generality is something like ‘a course on bioethics’.
There are already a number of syllabi and reading lists on effective altruism out there, as well as this teaching resources database from St Andrews. But I wonder if it would be useful for there to be a point person who had experience lecturing on these topics and who was keen to field queries and generally help people find the best resources and ways of teaching them. That person might have a sense of which guest lecturers would be good fits for complementing the class. They might keep track of which existing syllabi / reading lists suit what types of classes, so that when someone is thinking of teaching on this it’s as easy as possible to find the materials that are suited to the situation.
Something I think would be particularly useful is helping people think through which topics in ethics and bioethics courses seem more and less important to cover, and where to put the emphasis. When teaching ethics, particularly applied ethics, I found it tempting to focus on issues that are known to be contentious and seem interesting to debate (such as abortion and euthanasia). Those aren’t actually the ones that seem most important to have gotten my (Oxford) students thinking about. More important was the question of what proportion of your income you should donate if you end up in the richest 10% of the UK. I would have appreciated seeing more examples of applied ethics courses with more focus on topics like the latter. (One caveat here is that I only taught as a grad student, so it’s very plausible others have less use for support in this vein than I would have.)
Setting up academic institutes
The Global Priorities Institute is set up to do theoretical research into how to do the most good, particularly in philosophy and economics. So far, it seems to be the only centre aimed squarely at that. It seems to have done an excellent job attracting world-class philosophers, but found it slower going hiring economists. That’s likely in large part due to founder effects of being set up by philosophers. But another problem is plausibly that Oxford is far better ranked globally for philosophy than for economics. Setting up a global priorities centre at a top economics university seems like a hard project, but one that would be great if it succeeded. I expect it would require someone with links to the institution and an economics background, as well as solid buy-in from at least one established economist there. These are pretty specific constraints. On the other hand, I felt wholly unqualified when I started working with Hilary to set up the Global Priorities Institute at Oxford. It ended up only taking about two years, requiring quite a bit of perseverance and a willingness to ask for a lot of guidance from a multitude of people.
I also wonder whether it would be useful to have academic institutes set up by EAs in disciplines such as psychology and history. It seems like an important advantage to be able to hire researchers into posts where they can spend all their effort on research rather than on administration or on teaching standard courses. It also seems useful to allow researchers to feel a bit less beholden to the standard moulds for their discipline, whether that be by method (for example writing and publishing papers individually rather than collaboratively) or by publication subject matter.
Policy
Advising on civic action
I’m not a big fan of following current affairs in detail because it takes so much time and attention. But I do feel fairly strongly about taking part in democratic decision making when there are effective ways of doing so, like voting in a general election. I rely on various EA friends to help me figure out when there are significant occasions for taking part, and in those cases what I should do (for example here’s a guide to EU referendum campaigning from years ago which made it far easier for me to take part). I’d love for there to be more EA guides out there on things like ‘how much you should care about DfID merging into the FCO, and what you might do about it’ - preferably in as concrete and user-friendly a format as possible.
Translating research into action
My impression is that EAs tend to be rather more inclined to research than action, and that one result of this is there being more theoretical arguments and academic papers than fleshed-out policy recommendations. That’s more the case in some places than others. For example, in the UK there is an All-Party Parliamentary Group for Future Generations which seems to be doing good work. Developing policy recommendations seems both hard to get right and sensitive. But it does seem very important.
Philanthropy
My perception is that when it comes to donations, the EA community has historically focused most on long-term pledges (most notably Giving What We Can, for which the main pledge is lifelong). That seems sensible for a number of reasons, including the amounts of money involved and creating a culture of seriousness about helping others.
But it might also be useful to do more experimentation with different ways of encouraging giving. That may be hard to do for an organisation which is all about a lifelong pledge, so it could be useful to have others doing it. For example, when you inherit money might be a good time to make a significant donation: if the money isn’t part of your usual revenue stream, you might not need all of it. And you might want to honour the person you inherited from by helping others in their name. Yet doing so might be kind of tricky: you might be making a more substantial donation than you have in the past, and so want more information about where to donate and how to do so effectively, without knowing where to go for that. Providing support for people in those situations could be pretty useful.
Relatedly, it might be useful to have some easy way for someone who’s about to make their yearly donation to chat to another person about it. I find it kind of hard to know where I should donate and useful to chat to others about it, particularly if they’re considering similar donation targets to me. On the other hand it might be challenging to set this up, because talking to a stranger seems pretty aversive. Or it could be that people already use things like the EA London community directory to find someone to talk to about their donation decisions if they want that.
Other EA endeavours in this area are Momentum (which aims to integrate giving seamlessly into life) and fundraising experiments by Charity Science.
Cause- or sector-specific community builders
Figuring out what’s most impactful is really hard. It seems especially difficult and lonely when you’re not surrounded by others aiming for the same thing. I liked Charity Entrepreneurship’s idea of starting a non-profit to support people doing what they called ‘earn to give plus’ - where the ‘plus’ was things like learning communication skills and training others in EA in them. Alongside EtG+, I think it would be great if people could figure out how to best influence important companies towards doing good. That might mean encouraging large pharma companies to increase their in-kind donations of treatments for neglected tropical diseases, or working on improving recommender systems at top tech firms.
I could imagine the kind of support CE envisions being useful for people in a wide variety of areas. It could be cool to have a point person for an area who does things like: chats to people considering moving into that area (to help them decide), regularly checks in with people working in the area (to support them in their journey), and connects people who could productively collaborate. There seem to be people playing these types of roles in some parts of EA, but I expect we could do with more.
One problem with area-specific community building is that in order to be taken seriously and know enough to be helpful to people, you might yourself need to be doing object-level work in the area. In that case you might have rather little time for community building. Another challenge is that these kinds of activities might particularly benefit from someone doing them long term (so that all the people in an area are aware that they’re the point person, and know them well enough to be in regular contact with them, for example). That takes time to build up, and is demanding for the person involved.
I share the view that this seems potentially really valuable. Anecdotally, I know an EA who seems like they could do well in roles at EA orgs, or could potentially rise to fairly high positions in government roles in a country that's not a major EA hub. There are of course many considerations influencing their thinking about which path to pursue, but one notable one is that the latter just understandably sounds less fun, less satisfying over the long term, and more prone to value drift.
I think efforts to address this issue might ideally also tackle the fact that status, validation, etc. within the EA movement are easier to access by working at EA orgs than at other orgs, and probably especially hard to access by working at orgs outside the major EA hubs (e.g., a key department of a government agency in an Asian country rather than in the UK or US).
We tried to brainstorm some ideas for how EA in general could support people like this EA I know to happily pursue roles where (by default) there'd be no EAs in their orgs and maybe only a few in their city/country as well. Some (not necessarily good) ideas, from memory:
Some (also not necessarily good) ideas that come to mind now:
Re: Making status more easily accessible
One idea that just occurred to me was making it easier to reap status benefits from the GWWC giving pledge; e.g. I feel kind of proud seeing my name on this huge numbered list and being among the first ten thousand people to sign. Relatedly, subreddits and Wikipedia projects seem to actively use badges of honour to acknowledge things like being a donor, having helped with some task, etc. Maybe we could have "Pledge" badges.
Another idea: getting access to people one holds in high regard could also be something to think about. One could promote speakers coming to local groups, or generally promote networking within the community more.
Another thought that came up: not being chosen for 80,000 Hours' career coaching felt like a symptom of my relatively low value to the community (I'm not saying there's room for improvement in how they communicated that - it was years ago). I imagine it feels similar for some others. Maybe having motivated volunteers take on the rejected applicants would be a cheap way to signal "there are people in the community who value you being here and trying to work out an EA career path"?
That resonates with me.
And the mention of Wikipedia is interesting. When I was a pretty active Wikipedia editor, I indeed felt proud of and motivated by badge-type things (mainly "barnstars", if I recall correctly), as well as by random people thanking me for contributions (either by clicking a button or by posting on my talk page).
I'd guess a lot of EAs have similar mindsets, motivational patterns, etc. to a lot of Wikipedia editors, so it does seem like it could be interesting to try to learn from how Wikipedia "recruits", motivates, and retains editors.
Could you expand on what you mean by "Maybe we could have 'Pledge' badges"? E.g., where are you envisioning those badges being displayed? Are you envisioning them just being for taking the pledge, or also for other actions (e.g., recording donations, hitting some milestone in donations, being in the first 10,000 members, a badge another pledger can give you to say you helped them decide where to give...)?
(Your other ideas also seem potentially interesting, but I don't have anything in particular to say about them :) )
I thought about people's forum accounts. There are also the EA Hub accounts, but I basically never open mine; not sure about others. I'd probably do it similarly to Wikipedia (e.g. here), just having a small icon for the pledge, and when you hover over it: "Giving What We Can member since April 2nd, 2020". I hadn't thought about other ideas, e.g. being helpful to a person deciding on a donation - I like that idea! One worry that comes up is that it could get a bit cluttered. Also, something in me feels a bit awkward about proudly displaying something, like I could become the target of the bullies of my high school for feeling "too cool". The GWWC pledge is already so socially accepted as something cool that I don't feel this in that case.
Yeah, I think this idea - and other things in the same neighbourhood - is worth considering.
One thing worth mentioning is that GWWC already have badges you can display on websites, as well as Facebook photo frames. (This is where I found them.) So I think the intervention here wouldn't be creating them, but rather:
I think it could be worth talking to people like Luke Freeman (who's head of GWWC) and/or Aaron Gertler (the lead Forum moderator) about this.
See also the post EA jobs provide scarce non-monetary goods, which probably influenced the views I expressed here but which I'd forgotten about till recently.
I was just re-reading the transcript of the 80k interview with Ben Todd from November 2020 and saw that that includes a section that's relevant to what I was saying here, which I'll quote below in case it's of interest to any future readers:
(I'd heard that episode back in November 2020, so it may have been one of many influences informing my comment.)
I also made a tag this morning for posts relevant to Working at EA vs Non-EA Orgs (and tagged this post), so readers interested in this topic may be interested in those posts as well.
An impression after skimming this post (not well thought through; do point out what I missed):
Some of the tentative project ideas listed are oriented around extending EA's reach via new like-minded groups who will share our values and strategies.
Sentences that seemed to be supporting this line of thinking:
I'm unsure how much I misinterpreted specific project ideas listed in this post.
Leaving that aside, I generally worry about encouraging further outreach focused on creating like-minded groups of influential professionals (and even more about encouraging initiators to focus their efforts on making such groups look 'prestigious'). I expect that will discourage efforts in outreach to integrate importantly diverse backgrounds, approaches, and views. I would expect EA field builders to involve fewer of the specialists who developed their expertise inside a dissimilar context, take alternative approaches to understanding and navigating their field, or have insightful but different views that complement views held in EA.
A field builder who simply aims to increase EA's influence over decisions made by professionals will, I think, tend to select for and socially reward members who line up with their values/cause prioritisation/strategy as a default tactic. Conversely, taking the tactic of connecting EAs who like to talk with other EAs who are climbing similar career ladders leads to those gathered agreeing with and approving of each other more for exerting influence in stereotypically EA ways. Such group dynamics can lead to a kind of impoverished homogenisation of common knowledge and values.
I imagine a corporate, academic, or bureaucratic decision maker getting involved in an EA-aligned group and consulting their collaborators on how to make an impact. Given that they're surrounded by like-minded EAs, they may not become aware of shared blindspots in EA. Conversely, they'd less often reach out and listen attentively to outside stakeholders who could illuminate those blindspots.
Decision makers who lose touch with other important perspectives will no longer spot certain mistakes they might make, and may therefore become (even more) overconfident about certain ways of making impact on the world. This could lead to more 'superficially EA-good' large-scale decisions that actually negatively impact persons far removed from us.
In my opinion, it would be awesome if
Some reasons:
At this stage, I would honestly prefer if field builders start paying much deeper attention to 2. before they go out changing other people's minds and the world. I'm not sure how much credence to put in this being a better course of action though. I have little experience reaching out to influential professionals myself. It also feels I'm speculating here on big implications in a way that seems unnecessary or exaggerated. I'd be curious to hear more nuanced arguments from an experienced field-builder.
Yeah, I agree that there would be significant benefits to trying to set up another academic research institute at a university more focused on economics.
Same here.
The idea of "academic institutes set up by EAs in disciplines such as psychology and history" also sounds potentially exciting to me. And I wrote some semi-relevant thoughts in the post Some history topics it might be very valuable to investigate (and other posts tagged History may be relevant too).
Agreed. The University of Chicago — with its Becker Friedman Institute, Center for Decision Research, broad EA community, and generous economics funders — could be a promising option.
Definitely agree with this, as someone currently at UChicago! The Center for Radical Innovation for Social Change (RISC) recently put out a call for animal welfare proposals and Steve Levitt has connections to Schmidt Futures (an EA-adjacent philanthropic initiative), so that could be a promising place to start.
Thank you for sharing! I hadn't looked deeply into RISC's work before — and very helpful to know about Levitt's ties to Schmidt Futures.
This seems like a good idea to me. And the second idea seems to me like a potential Task Y, meaning something which has some or all of the properties:
Relatedly, that second idea also seems like something anyone could just start and provide value in right away - no need for permission, special resources, or unusual skills. (My local EA group actually discussed similar things previously in the context of climate change, and took some minor actions in this direction.)
I've just made a shortform post on Some ideas for projects to improve the long-term future. I brainstormed the ideas before seeing this post, but this post is part of what prompted me to share the ideas publicly. And the shortform is only moderately rather than massively long, so I'll copy the whole thing below rather than just linking to it. (Maybe that's a bit weird? If so, sorry!)
---
In January, I spent ~1 hour trying to brainstorm relatively concrete ideas for projects that might help improve the long-term future. I later spent another ~1 hour editing what I came up with for this shortform. This shortform includes basically everything I came up with, not just a top selection, so not all of these ideas will be great. I’m also sure that my commentary misses some important points. But I thought it was worth sharing this list anyway.
The ideas vary in the extent to which the bottleneck(s) to executing them are the right person/people, buy-in from the right existing organisation, or funding.
I’m not expecting to execute these ideas in the near-term future myself, so if you think one of these ideas sounds promising and relevant to your skills, interests, etc., please feel very free to explore the idea further, to comment here, and/or to reach out to me to discuss it! [If commenting, please comment on the shortform version of this, to centralise discussion there.]
See also:
The views I expressed here are my own, and do not necessarily reflect the views of my employers.
Thanks for this post! I think I basically share the view that all of those prompts are useful and all of those "gaps" are worth seriously considering. I'll share some thoughts in separate comments.
(FWIW, I think maybe the idea I feel least confident is worth having an additional person focus ~full-time on - considering what other activities are already being done - is creating "some easy way for someone who’s about to make their yearly donation to chat to another person about it.")
Regarding influencing future decision-makers
Both of those claims match my independent impression.
On the first claim: This post using neoliberalism as a case study seems relevant (I highlight that mainly for readers, not as new evidence, as I imagine that article probably already influenced your thinking here).
On the second claim: When I was a high school teacher and first learned of EA, two of the main next career steps I initially considered were:
I ultimately decided on other paths, partly due to reading more of 80k's articles. And I do think the decisions I made make more sense for me. But reading this post has reminded me of those ideas and updated me towards thinking it could be worth some people considering the second one in particular.
I feel quite good about the ideas in this section - I'd definitely be excited for one or more things along those lines to be done by one or more people who are good fits for that.
Some of those activities sound like they might be sort-of similar to some of the roles people involved in other EA education efforts (e.g., Students for High-Impact Charity, SPARC) and Effective Thesis have played. So maybe it'd be valuable to talk to such people, learn about their experiences and their perspectives on these ideas, etc.
Misc small comments
This does seem like a good idea to me, but I think Generation Pledge might already be doing something like that? (That said, I don't know much about them, and I don't necessarily think that one org doing ~X means no other org should do ~X.)
Also, for people thinking about this broader idea of potentially setting up pledges (or whatever) that cover things GWWC isn't designed for, it may be useful to check out A List of EA Donation Pledges (GWWC, etc).
I know very little about Animal Advocacy Careers, but this sounds like the sort of thing they might do? And if they don't do it, then maybe they could start doing so for the animal space (which could be useful directly and also could provide a model others could learn from)? And if they raise strong specific reasons to be inclined against doing that (rather than just reasons why it's not currently their top priority), that could be useful to learn from as well.
Yeah, I think it'd be pretty terrible if people took EA's focus on prioritisation, critical thinking, etc. as a reason to not raise ideas that might turn out to be uninteresting, low-quality, low-priority, or whatever. It seems best to have a relatively low bar for raising an idea (along with appropriate caveats, expressions of uncertainty, etc.), even if we want to keep the bar for things we spend lots of resources on quite high. We'll find better priorities if we start with a broad pool of options.
(See also babble and prune [full disclosure: I don't know if I've actually read any of those posts].)
(Obviously some screening is needed before even raising an idea - we won't literally say any random sequence of syllables, and we should probably not bother writing about every idea that seemed potentially promising for a moment but not after a minute of thought. But it basically seems best to keep the bar for raising ideas low.)
I also think Charity Science might have tried getting people to pledge in their wills.
A long quibbly tangent
I'd say there’s a >50% chance that this would indeed be good, and that it’s plausible it'd be very good. But it also seems to me plausible that this would be bad or very bad. This is for a few reasons:
[The above statements of mine are pretty vague, and I can try to elaborate if that’d be useful.]
So I'd favour thinking more about precisely what sort of changes we want to make to future decision-makers’ values, reasoning, and criteria for decision-making, and doing so before we make any major pushes on those fronts.
And beyond that generic "more research needed" statement, I'd favour trying to package increases in consequentialism and generic altruism with more reflection on moral circles, more reflectiveness in general, various rationality skills and ideas, and probably some other things like that.
The following posts and their comment sections contain some relevant prior discussion:
...but, I think all of this might be pretty much just a tangent. That’s because I think we could just change the sentence of yours that I quoted at the start of this comment to make it reflect a broader package of attributes we want to change in future leaders, and your other points would still stand. E.g., teaching at universities could try to inculcate not just consequentialism and generic altruism but also more reflection on moral circles, more reflectiveness in general, various rationality skills and ideas, etc.