This post explains a number of recent changes regarding the organisational structure of the Centre for Effective Altruism (CEA). These are primarily changes to the internal structure of the organisation; for that reason, this post should be primarily of interest to people who follow CEA very closely. However, I also explain briefly how this might bear on the externally-facing activities CEA does in the future.
In summary:
- Up until recently, CEA ran on a ‘federal’ model, as five largely autonomous teams: 80,000 Hours, Giving What We Can, Global Priorities Project, Effective Altruism Outreach, and the Central team.
- Four of these teams (EAO, GWWC, GPP and the Central team) are merging into the same management structure, and from now on will operate as a single unit. 80,000 Hours will continue as an autonomous organisation, based in the Bay, though still fiscally sponsored by CEA.
- CEA will therefore act much more like a unified organisation than it has done in the past. This newly-defined CEA will be led by me (Will MacAskill); I will therefore be playing a much more active role in the management of CEA than I have in the past.
Background
In the past, CEA has been the umbrella organisation for a collection of different nonprofits. As of the end of last year, these consisted of: Giving What We Can, 80,000 Hours, the Global Priorities Project (in collaboration with the Future of Humanity Institute), and Effective Altruism Outreach. There was also a Central team, which did shared operations for all the other projects.
The arguments for running CEA as a collection of separate organisations included the following:
- By experimenting with different ideas, cultures, and approaches, we could learn what worked best and focus on that.
- We could allow different projects to promote messages that differ in tone, content, or emphasis.
- We could target different organisations to different demographics.
- Sometimes, running separate projects would increase our available resources. (For example, a donor might only be interested in one project).
For example, in 2011 we decided to set up 80,000 Hours as a distinct organisation from GWWC because:
- We worried that 80k might become a lot more controversial than GWWC, because of the idea of earning to give, and we wanted to keep these messages separate.
- Some people felt much more excited by the potential impact of 80k, whereas others felt more excited by the potential impact of GWWC.
- We thought that the sorts of people who would be interested in socially-oriented career advice were very different from the sorts of people who were interested in donating.
- Having separate organisations allowed each team to focus entirely on their own project.
The founding of new projects within CEA then progressed on the model we’d set up with 80,000 Hours and GWWC.
Recently — due in significant part to changes in our situation — we started to become less convinced that the federal model was the optimal way to structure all the different relationships between the different projects in CEA.
- The reasons in favour, though still present, in some cases became weaker:
- We felt that it's easier to quickly experiment and scale projects up or down within one organisational structure than when those projects are run as separate organisations.
- We found that the different messages naturally came to be closely associated. Because of the rise of ‘effective altruism’ as a term, people would refer to CEA as a single organisation (despite our initial intention that this organisational label not be public-facing).
- Because we now have much greater access to funding and potential employees than we did previously, the argument from additional resources no longer has the same force.
Moreover, there were some cases where the different organisations led to confusion:
- Sometimes different organisations ended up running similar projects: for example, EA Build (run by EAO) and GWWC were both trying to help grow EA local groups. This created confusion for staff, donors, EA community members, and third parties interested in the work of CEA.
- Internally, there would often be decisions that affected all the different organisations. These decisions would be made by the “Senior Management Team”, consisting of leaders of all the different organisations. This decision-making process often felt slow, bureaucratic and unnecessarily complicated.
Finally, there was an unusual opportunity for this change to happen. I had significantly more free time as a result of (i) the launch of the book and subsequent media activity dying down; (ii) renegotiating my contract with the University, resulting in a much lower teaching load. This gave us the unusual opportunity for CEA to be led by someone with many years of experience working with each of the individual organisations within CEA. At the same time, leadership of the different organisations within CEA felt excited by the prospect of being able to work together and rally around a single shared vision.
What changes are happening at CEA
We’re unifying the teams that compose GWWC, EAO, GPP and CEA Central. We’ll divide CEA into a Community & Outreach Division and a Special Projects Division. The Community & Outreach Division will focus on the ‘core’ CEA activity, which is helping to grow and strengthen the EA community. This includes our on-line presence, local groups, EA Global, EAGx, media, marketing, and the Giving What We Can Trust. The Special Projects Division will have three aims: high net worth philanthropic advising, policy, and fundamentals (explained more below); the first is a continuation of the research arm of GWWC, while the latter two are continuations of the two aims of GPP, separated into different teams.
80k will still operate independently, and we would encourage people to regard 80k as a separate entity from CEA (though CEA will remain as a fiscal sponsor of 80k). The case for also merging the 80k team with the other teams seemed weaker for a number of reasons: 80k was already by far the most autonomous of the organisations under CEA’s umbrella, and it had not faced the issue of overlap with other organisations within CEA.
I am taking on the role of CEO of CEA (in addition to being a Trustee). This means I will spend far more time guiding and managing CEA than I have done in the past. Tara Mac Aulay will continue as COO and lead the Community & Outreach Division. Michael Page, a new hire, will lead the Special Projects Division. Kerry Vaughan will continue to work on EA community-building and Seb Farquhar will continue to work on policy. Michelle Hutchinson will transition from running GWWC to helping to set up an Oxford Institute for Effective Altruism (explained more below).
How will that change what you see from CEA?
In the short term, not all that much. We’ll continue to promote effective giving under the banner of ‘Giving What We Can’, keeping its pledge, website, etc. The main change is that people will work on it from within a unified team rather than in its own siloed team. We’re reasonably likely to deprioritise or discontinue the GPP label, and we will not continue with the EAO label.
In the mid term, our aims include the following:
- As a default, we plan to continue to grow at at least the rate that we’ve grown in the past (which has meant doubling in size approximately every 18 months).
- Development of and greater focus on effectivealtruism.org
- Greater focus on understanding the desires of the EA community, and using that as an input to decisions about what projects to try or prioritise.
- Greater focus on increasing the impact of those who already self-identify as EAs and on increasing the potential benefits of EA as a community.
- Greater focus on:
- Cause neutrality (rather than a focus on global poverty)
- Means neutrality (rather than a focus on donation)
- Greater focus on intellectual development of EA, including on high-level theory.
Some other changes include the following:
Local Groups
We’ll encourage people to call new groups an “EA local group” rather than a “GWWC local group”. (We currently give people the choice and in most cases local group founders choose to refer to themselves as EA local groups.) However, if people want to run as a GWWC group and not an EA group, we won’t force the issue.
Charity Research
Our main focus within charitable research will be experimenting with a new project: boutique philanthropic advice to major donors. We’ve found that there is notable demand for evidence-based charity research from major donors who, for a variety of reasons, do not want simply to support GiveWell’s top charities or to work with Open Philanthropy. (Often, the donor is interested in a particular cause area, such as disaster preparedness, that is different from the work of GiveWell’s top charities.)
We’ve already been experimenting with this project over the last six months. People we’ve provided advice for include: entrepreneurs who have taken the Founders’ Pledge and exited; private major donors who contacted us as a result of reading Doing Good Better; former Prime Minister Gordon Brown, for his International Commission on Financing Global Education Opportunity; and Alwaleed Philanthropies, a $30 billion foundation focused on global humanitarianism. This project is still very much in its infancy and we’ll assess its development on an ongoing basis.
Within global health and development, we will move to simply recommending GiveWell’s top charities, rather than curating an independent but overlapping list of recommended charities based in large part on their research (as we do now). In the past, the existence of two similar lists of recommended charities has created confusion, and we feel that the amount of value to be gained from doing work so similar to GiveWell’s is comparatively small relative to our other research opportunities.
It’s possible we may, in addition, point readers to charities in cause areas outside of global health and development, with the caveat that such charities will not have had the same level of assessment as GiveWell’s top charities.
Policy
Both our policy work and our fundamentals research will continue the work done by GPP, though now split into two separate teams.
We think that policy is an important area for effective altruism to develop into, and we feel we have had some significant success within policy so far. Recent developments in British politics mean that our plans regarding our policy work are currently in flux; depending on how this plays out, we could do considerably more or considerably less policy work.
Fundamentals Research
Partly due to demand from some members of the EA community, we’ll be experimenting with doing more theoretical research on effective altruism. This comes in two main categories: ‘crucial considerations,’ or ideas that have the potential to radically change how we evaluate our options; and ‘cause prioritisation’, or research on how to figure out which cause-areas one ought to focus on. We believe that this work is both extremely important and extremely hard to do, and will assess our progress on this front on an ongoing basis.
Oxford Institute for Effective Altruism
We have plans to set up an academic institute focused on effective altruism, based at Oxford University. We hope that it can begin as of October 2017, though this is contingent on successful grant applications. The Institute will work on theoretical issues that arise from the project of trying to do the most good, straddling philosophy, economics, and other relevant fields, producing research that is suitable for publication in academic journals. The aim is for this to be run by Hilary Greaves as Research Director and Michelle Hutchinson as Operations Director. We believe that this represents an exciting opportunity to help create and shape effective altruism as an academic research field.
We’ll write more about our plans, including elaborating on some of the projects listed above, in the near future.
Exciting stuff!
Do you have plans to publish summaries of the research you do, e.g. on Wikipedia or the EA Wiki? If I remember correctly, GiveWell was originally "The Clear Fund", and their comparative advantage was supposed to be making the research behind their grants public, instead of keeping research to themselves like most foundations. Making research public lets people criticize it, or base their giving off of it even if they didn't request it. See also. There are certainly reasons to stay quiet in some cases, and I could understand why donors might not want their names announced, but it feels like the bias should be towards publishing.
I'd also challenge you to think about what CEA's "secret sauce" is for doing this research for donors in a way that's superior to whatever other group they would consult with in order to have it done. I'm not saying that you won't do a better job, I'm just saying it seems worth thinking about.
Some people have argued against this. I'm also skeptical. My sense is that
This is an area where it plausibly does make sense to use a non-CEA label, since as soon as you step into the political arena, you are inviting people to throw mud at you.
The highest leverage interventions may be at the meta-level. For example, creation of a website whose discussion culture can stay friendly and level-headed even with many participants--I suggested how this might be done at the end of this essay. Or here's a proposal for fighting filter bubbles.
I'm generally skeptical that the intuitions which have worked for EA thus far will transfer well to the political arena. It seems like a much different animal. Again, I'd challenge you to think about whether this is your comparative advantage. The main advantage that comes to mind is that CEA has a lot of brand capital to spend, but doing political stuff is a good way to accidentally spend a lot of brand capital very quickly if mud is thrown. As a flagship organization of the EA movement, there's also a sense in which CEA draws from a pool of brand capital that belongs to the community at large. If CEA does something to discredit itself (e.g. publicly recommends a controversial policy), it's possible for other EA organizations, or people who have identified publicly as EAs, to catch flak.
As a broad question: I understand it's commonly advised in the business world to focus on a few "core competencies" and outsource most other functions. I'm curious whether this also makes sense in the nonprofit world.
Thanks so much for this comment!
Yes, the default will be that everything we produce is published openly.
In most cases so far, the counterfactual is little research, rather than using some other consultancy. And in the wider landscape, there seems to be just very little in the direction of what we'd call EA charity recommendations. There's GiveWell / Open Phil, there's philanthropic advising that's very heavily about understanding the preferences of the donor and finding charities that 'fit' those preferences, and there seems to us to be a very significant gap in the middle.
In response to the linked-to article and notes:
1. I'm intuitively also very wary of EA engaging in partisan politics. Indeed, when I think of EA as applied to politics, I think of it as almost being defined by being non-partisan, opposed to tribal politics: you come to views on policy on a case-by-case basis, weighing all the best evidence, deeply understanding all the various viewpoints (to the point of passing ideological Turing tests), being highly self-sceptical and looking out for ideological bias.
2. It's also a major issue that whether certain policies are even good or bad can be incredibly difficult to know. E.g. when I think about AI policy, I can think of things where I know the magnitude of the impact of the policy would be very great indeed, but have no idea about the sign of the impact. Or e.g. being pro EU immigration to the UK 10 years ago (surely good!) ultimately leads to the unintended consequence of Brexit (oh no, wait, I hadn't thought about political equilibrium effects).
If that means we should abandon policy and politics as a whole, however, I think that would be wrong. Politics is a huge lever in the world, perhaps the single biggest lever, and to dismiss from the outset that whole method of making the world better would be to far too quickly narrow down our options.
I agree that we need to think very carefully about what labels we use, and we should be very concerned with how the term 'effective altruism' might come to lose its meaning and value, or become the victim of malicious PR.
Because of this general principle, I stress a lot about how many different things CEA is doing. I'm not sure whether the general principle is right and we're the exception to it, whether the principle just isn't right for the sort of organisation we are, or whether we're being irrational. My current instinct is that we should be aiming to focus more than we have done, and that we've just taken a good step in that direction.
Seems pretty convincing. This work also seems somewhat well suited to CEA, since you're a natural point of contact for people interested in giving better, and large donors will be more impressed by recommendations made by an Oxford-affiliated organization.
I agree that it seems like a big important lever, but I'm less certain that it's a good fit for the profile of strengths the EA movement has currently built up. If someone were to create an app that made running ideological Turing tests easy, and EAs in charge of policymaking were passing them at a much higher rate than matched controls with comparable education and ability, that's the kind of thing that might convince me that policy was a comparative advantage. (Same for winning bets about the results of particular policies with matched controls.) So far, I've seen much more focus on e.g. creating people with high-earning careers than creating people who score well according to these criteria. (Although that's not the only conceivable approach--one could imagine the EA movement pushing for the legalization of prediction markets to outsource the work of making accurate predictions, for instance.)
It seems unlikely that CEA could engage in politics in a non-partisan fashion if you can't even write a paragraph about being skeptical of partisan politics without resorting to partisan politics.
Being pro EU immigration, as opposed to pro EU generally, is still deciding on a policy-by-policy basis.
The true underlying objection to partisan politics isn't that it involves political parties, it's the tribal effects, which occur equally with immigration or brexit.
I don't know how much you know about policy work by EA organizations besides CEA/GPP, so I thought I'd fill you in. There's a lot going on.
are all doing policy work. That's five different organizations closely associated with effective altruism working on policy in three different countries (United States; United Kingdom; Switzerland). Even if we discount SAIRC's association with EA, that's still at least four organizations. I don't know how much support policy work has in the EA community at large, outside of all these organizations, but I'm assuming it's enough that the sentiment won't go away soon. It seems the effective altruism movement will be interested in policy work even if CEA itself isn't.
I doubt there's currently much value to be had in coordinating policy efforts between different countries. Within the EA community, though, solidarity in working on policy internationally, and sharing resources/research/talent between organizations, might be valuable.
You said CEA has a lot of brand capital it would be sad to see blown on political projects which don't bear fruit, and may hurt CEA's and effective altruism's reputation. I think CEA has more brand capital than these other organizations, except perhaps Open Phil. Of course, Open Phil is in the (non-profit) business of grantmaking, so their influence on policy will be through other organizations. This may distance them from controversy or blowback for programs run by their grantees, which are probably more experienced in navigating potential pitfalls of policy work anyway.
Sentience Politics and SEA/EAF seem likely to escalate rather than de-escalate policy work in the near future. If either of them discredits themselves, it might only hurt the EA brand in the German-speaking world and Scandinavia, or perhaps continental Europe. However, the work SEA/EAF has done to spread and grow effective altruism in Europe, and the projects this has enabled, seems to me one of the most promising initiatives in the whole community. So, they hold much of EA's potential in their hands.
Anyone of the opinion effective altruism should be warier of entering the field of policy needs to keep these considerations in mind, not just what CEA does.
CEA is getting good at policy now. They have some experience with advising, and some contacts in the major parties, and can cause some changes in where major amounts of funds go. Obviously there are massive amounts of moveable funds in the public sector, and it's hardly a matter of lobbying in direct opposition to major established interests, but about choosing important issues like aid effectiveness or risky tech that political ideology is more neutral on. And you can certainly advise on such topics while remaining above the political fray. Whether to be drawn into ideological arguments in exchange for additional short-term policy gains is a somewhat separate question.
So it doesn't make sense at all that you'd be sceptical about political intervention by CEA.
I agree. As the flagship organisation, CEA stepping into politics is unnecessarily risky. Why not let other smaller organisations experiment with this first?
Wikipedia's policies forbid original research. Publishing the research on the organization's website and then citing it on Wikipedia would also be discouraged, because of exclusive reliance on primary sources. (And the close connection to the subject would raise eyebrows.)
I think this is worth mentioning because I've seen some embarrassing violations of Wikipedia policy on EA-related articles recently.
If someone at CEA reads a bunch of studies on a particular topic, and writes several well-cited paragraphs that summarize the literature, this would be appropriate for Wikipedia, no? (I agree other ways of interpreting "research" might not be.)
This might be alright. See these guidelines though: https://en.wikipedia.org/wiki/Wikipedia:No_original_research#Synthesis_of_published_material
Awesome! Really great to see the move towards consolidating the many overlapping projects, something that's made me skeptical of a number of them in the past. (Also excited that you'll be more directly involved!) This makes me a lot more excited about CEA.
How will fundraising work under the new structure?
Thanks!
Fundraising: The current plan is that CEA will fundraise for all projects (with me as lead on that). We'll update all donors every two weeks with info across all CEA projects (most individual projects already do this for their own donors), and have an annual review.
Earmarking: Fungibility has been a headache since forever; and in the past 'restricting' to a particular project, even though we were very careful with the budget lines, wouldn't completely avoid fungibility concerns, because other donors are responsive to RFMF and would then become a little less likely to donate to a project that's received more money.
The idea that's currently in my head, but not (yet) a policy, is that we would, to a first approximation, only accept unrestricted donations, but that every donor would be asked to 'vote' by telling us how, ideally, they would want their donation to be used. This 'vote' isn't binding on CEA, but gives us useful information about what smart people with money on the line think CEA should be doing more of. I take the views of our donors very seriously - they tend to be the external people who are most highly engaged with CEA's work - and so it wouldn't at all just be for show. I'd welcome ideas about other ways of doing donations.
And to be clear, previously restricted money to a CEA project will still be used in the manner it was restricted for, under the new CEA structure, unless the donor tells us that they're happy to lift the restriction.
That's a cool idea. Obviously on that proposal, the donors should also be able to say 'I don't know' or indicate how confident they are.
I'm skeptical in general that having people give their confidence levels leads to better aggregate predictions/outcomes, given that most people are terrible at confidence calibration. At the least, this would underweight the opinions of well-calibrated people while overweighting those of overconfident people.
I just meant they'd indicate how confident they are (1-9).
I'd guess the best approach here should be basically copied from whatever the prevailing view is in the literature around consensus finding and aggregating opinions.
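For what it's worth, the simplest baseline from the opinion-aggregation literature is just a confidence-weighted average of the allocations. This is a hypothetical sketch of how such a non-binding donor 'vote' could be tallied, not anything CEA has actually proposed; the project names and the 1-9 confidence scale are illustrative assumptions:

```python
def aggregate_votes(votes):
    """Aggregate non-binding donor votes into one suggested allocation.

    votes: list of (allocation, confidence) pairs, where allocation maps
    a project name to the fraction of the donation the donor would like
    it to receive (fractions sum to 1), and confidence is the donor's
    self-reported confidence on a 1-9 scale.

    Returns a dict mapping each project to its confidence-weighted
    average fraction.
    """
    totals = {}
    total_weight = 0.0
    for allocation, confidence in votes:
        total_weight += confidence
        for project, fraction in allocation.items():
            totals[project] = totals.get(project, 0.0) + confidence * fraction
    return {project: weight / total_weight for project, weight in totals.items()}


# Two illustrative donors: a confident one favouring community work,
# and a less confident one favouring policy.
votes = [
    ({"community": 0.7, "policy": 0.3}, 9),
    ({"community": 0.4, "policy": 0.6}, 3),
]
result = aggregate_votes(votes)
# community: (9*0.7 + 3*0.4) / 12 = 0.625; policy: 0.375
```

The point of the sketch is just that weighting by self-reported confidence is one line of code; the hard part, as noted above, is whether self-reports are calibrated, which is where the consensus-finding literature would earn its keep.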
Awesome--please put me on the list when those updates start happening :)
I really like this idea. Hopefully donors are happy with it (I know I personally would be).
Exciting news! Seems like a very positive step!
Really cool that you'll be more actively involved with CEA. Could you give some more clarity into what your role will be/how management responsibilities will be distributed? My naïve guess would be that you've got a big comparative advantage in a lot of areas, but not necessarily as a manager (particularly with lots of integration work coming up). I've got a hunch you've thought about replaceability issues in career choice, so I'd love to hear your thoughts on what you'll personally be focusing on and why.
People have argued for (i) a flatter organizational structure, (ii) pivoting from charity evaluation to more fundamental research (in order to add more value over and above GiveWell), and (iii) growing emphasis on the EA brand for a while, so it's good to see this feedback incorporated.
The Institute for EA and the reported success with high net-worth outreach are awesome developments, as is Will's direct participation.
Great news, and cheers on all of your terrific work!
Thanks!
Yeah, I want CEA strategy to be guided significantly by the views of engaged members of the EA community. (Of course, that doesn't mean we'll always go with others' views, not least because different people regularly disagree.) This, it seems to me, has both inside and outside view support. Inside view: when I talk to engaged EAs, they often have interesting and well-reasoned views about what CEA should or should not be doing. Outside view: the current dedicated EAs are the equivalent of the 'early users' of EA as an idea, and the standard advice for startups is to pay a huge amount of attention to what early users want, and be responsive to that. I also simply see CEA's role in significant part as to serve the EA community, so it's therefore obviously important to know what that community thinks is most important.
"Early users" of EA would be the beneficiaries, not the participants, right? This relates to the fundamental reason why charities can get away with being ineffective--the sentient beings receiving the benefit are not the ones deciding to contribute money and effort. Your goal shouldn't be to please EAs, it should be to help people. Usually, pleasing donors doesn't align with helping people, although that's probably less true in CEA's case.
This all sounds great, thanks for the full briefing!
I'm particularly excited about two bits.
The shift of focus to more theoretical research. I've been worrying for a while that there hasn't been enough discussion/openness/clarity about the theoretical justifications underpinning various conclusions. Most obviously, I've noticed that most members of EA 'high command' (e.g. Will MacA, Ben Todd, Rob Wiblin) adopt some form of total utilitarianism, whereas a sizeable % of EAs I speak to (maybe 20-40%) are more inclined to a person-affecting view, and are a bit sceptical of prioritising X-risk over helping people in the here and now. I obviously don't mind people disagreeing, but I don't feel that person-affecting views are given a fair whack o' the whip, and it's not clear what the relevant forum would be to bring these concerns to at present. I'm sure others will have other theoretical considerations they think are neglected.
I think the Oxford Institute for Effective Altruism sounds awesome. In part because it may be a partial answer to the above, and also, on a more personal basis, because I may find a home for some of the interdisciplinary research I want to do that sits awkwardly in a philosophy department.
Building on Gleb's comment, I'm curious to see how the new Community & Outreach Division will work with other organizations working in the focus area of 'Community & Outreach'. At different times over the past year, I've considered 'movement growth', 'movement development', and 'increasing coordination' to be among the most promising focus areas in EA[1]. Anyway, going forward, I suggest we (internally) refer to all this just as "Community and Outreach" for ease of use.
Anyway, I'm impressed with the work of LEAN/.impact, GWWC, and EAF/SEA to dramatically increase growth and access to resources for local groups in the last year. I would've endorsed one of those as my top charity pick for this last year had I bothered to better assess the differences of impact between them.
Talking with Tom Ash and others, I learned LEAN, GWWC, SEA/EAF, and EA Outreach (EAO) were all working together on community & outreach. It confused me that EAO and GWWC were working separately even though both were under the CEA umbrella. Also, I couldn't get a handle on what EAO was doing in this network that was unique.
I've been assuming that because EAO shared staff with EA Ventures, the EAG planning team, and other projects, there have been times in the past year when EAO has been on the backburner, i.e., not an active focus of the U.S.-based CEA team. Please correct me if this assumption is wrong.
I think it makes sense to collapse EAO under the umbrella of the new Community & Outreach division. Will this division still have any operations based in the United States? What will happen to the U.S.-based team working for the CEA? Will all the same staff be kept on working in similar roles in the new division?
Also, I suggest doing some kind of 'exit assessment' as EA Outreach winds down its operations. I think it'd be a shame if EAO was collapsed to reduce redundancy, but the new division and everyone in Oxford didn't take the opportunity to learn from the experiences, successes, and trials the U.S.-based team has faced with the novel projects they've worked on this year (e.g., Pareto Fellowships, EAGx, etc.).
[1] This is largely because I'm personally quite uncertain between object-level causes. I think others could very reasonably disagree with me on whether meta-level foci like 'community and outreach' or 'cause prioritization' are better to currently work on than poverty alleviation, x-risk mitigation, or animal advocacy.
Thanks! Lots of points here.
One thing: despite the confusing name, from CEA's perspective, EAO was the organisation that included EAG and EAV as parts.
Working with other groups: I hope the new structure will make it quite a bit easier for other groups to co-ordinate with CEA, because the structure will be substantially simpler.
'Exit assessment': This is slightly complicated by the fact that there's no simple "we tried this project and it didn't work" story here. But I do hope to be able to write more about what things we've learned at CEA in the near future.
Second the idea of an "exit interview/what we learned." This would be helpful for the broader movement as a whole, to optimize operations/reduce mistakes.
After all, CEA is not the only organization that houses a number of units. As Evan pointed out in response to my comment below, SEA/EAF houses a number of orgs. So does .impact, with the Local Effective Altruism Network, Students for High-Impact Charity, etc.
Other orgs are taking on and collaborating on meta-projects, for example the EA Marketing Resources Bank, and it would be good to learn from CEA's experience.
Effective Altruism certainly has the conceptual richness to support a research institute, and I shall look forward to the development of the proposed Oxford Institute for Effective Altruism with considerable enthusiasm. In terms of supporting the future intellectual development of the field, I hope the Institute will deliver (or contribute to) taught programmes at Oxford and build up a significant postgraduate research community. A research focus on crucial considerations and cause prioritisation is also appealing because (a) these are extremely powerful but relatively neglected ideas, and (b) when they are linked to "cause neutrality" and "means neutrality" they can become the basis for effective practical action in many diverse domains that have no connection to global philanthropy. For example, I am interested in the application of EA principles in university administration and regional development. There must be many others who have fairly constrained responsibilities and hypothecated budgets but who nevertheless want to use the concepts and methods of EA to do the best they can. The new Institute should help them do that.
From time to time I worry that the ideas that make EA so interesting also constitute a barrier to effective outreach. When newcomers engage with the movement and its literature they must often be surprised by the relatively few steps from a concern with global poverty to AI risks, Dyson spheres and von Neumann probes. This is exciting stuff, but many who want to do good better are never going to be interested in things like existential risks, Bayes’ Theorem or cognitive biases - as important and relevant as these things are. I think we have to accept that the intellectual appeal and the practical appeal of EA are never going to converge for many and ensure that the way things are organised reflects this dichotomy.
80,000 Hours is probably the most accessible branch of the EA movement and I hope that after its move to the Bay it will consider a partnership with CFAR to develop a programme to deliver practical transferable skills based on EA principles. I think this would have enormous appeal to many of its clients.
I've thought for a long time CFAR and 80k have much in common, so I'm glad to see others are thinking about it!
We promote CFAR workshops to 80k users, and CFAR often offers discounts to those who are altruistically motivated. CFAR has also increased its focus on doing good in the last year: http://lesswrong.com/lw/n39/why_cfar_the_view_from_2015/#ambitions
Thanks for the response! Now that you'll be closer to CFAR, are there more new ways you'll be collaborating with them?
I'd really like to see CFAR workshops available in the UK too. Is this something CEA/80,000 Hours might be able to facilitate?
It would be useful to know what the plan is for the GWWC Trust, if GWWC are not producing their own recommendations. Will money going into the Trust simply be donated to GiveWell's top charities, whatever they may be? And will it be donated evenly across those charities, or allocated according to GiveWell's current advice about proportions? Thanks
Hey Sam! For people who choose to let us decide where the money goes, the next payout (Oct) will be the same as before (1/4 each to SCI, AMF, DWI, PHC), and the one after that (Jan) will follow the allocation GW recommends in its Dec update. I expect we will continue allowing donations to the charities the Trust has given to in the past (e.g. PHC, IPA), but that the default charities suggested for donations will be the ones GW lists as top charities.
What led to the decision for 80k to move to the SF Bay Area?
Fully in support of the majority of these changes. Well done taking such a big step!
I appreciate the clear reasoning for the changes taking place. I'm especially glad to see the new prioritization of fundamental research; it's something sorely lacking in the movement right now.
I'm curious how these changes will affect CEA's collaboration with other EA meta-charities outside its current umbrella. For example, the Local Effective Altruist Network and The Life You Can Save also support local groups of Effective Altruists, and have been collaborating with GWWC on that. I, and I imagine other forum readers, have similar questions about CEA's collaborations with Animal Charity Evaluators, Students for High-Impact Charity, Intentional Insights, .impact, the Effective Altruism Foundation, Charity Science, etc.
ACE and TLYCS were incubated as projects at CEA before they became separate organizations independently run out of the U.S. I assume they still have a working relationship they leverage to coordinate joint projects. If I'm not mistaken, CEA has collaborated with ACE in the last year on laying the groundwork for one or more effective animal advocacy (EAA) conferences in 2016. CEA also incubated and has worked closely with the Stiftung für Effektiven Altruismus/Effective Altruism Foundation (SEA/EAF). They're based out of Switzerland, but also do lots of work in Austria and Germany, and they're the parent foundation of Raising for Effective Giving (REG), Sentience Politics, and the Foundational Research Institute (FRI).
What are the uncertainties involved? What sort of events would lead you to do considerably more policy work? What sort of events would lead you to do considerably less policy work? Can you say anything about that?
I'm sorry we can't say more at this stage. One downside of policy work is that much of it can't have the same level of transparency as our other projects.