Without trying to wade into definitions, effective altruism is not just a philosophy and a plan of action, it’s also a community. And that means that community dynamics are incredibly important in shaping both the people involved, and the ideas. Healthy communities can make people happier, more effective, and better citizens locally and globally - but not all communities are healthy. A number of people have voiced concerns about the EA community in the recent past, and I said at the time that we need to take those concerns seriously. The failure of the community to realize what was happening with FTX isn’t itself an indictment of the community - especially given that their major investors did not know - but it’s a symptom that reinforces many of the earlier complaints.
The solutions seem unclear, but there are two very different paths that would address the failure - either reform, or rethinking the entire idea of EA as a community. So while people are thinking about changes, I’d like to suggest that we not take the default path of least resistance reforms, at least without seriously considering the alternative.
“The community” failed?
Many people have said that the EA community failed when they didn’t realize what SBF was doing. Others have responded that no, we should not blame ourselves. (As an aside, when Eliezer Yudkowsky is telling you that you’re overdoing heroic responsibility, you’ve clearly gone too far.) But when someone begins giving to EA causes, whether individually, or via Founders Pledge, or via setting up something like SFF, there is no-one vetting them for being honest or well controlled.
The community was trusting - in this case, much too trusting. And people have said that they trusted the apparent (but illusory) consensus of EAs about FTX. I am one of them. We were all too trusting of someone who, according to several reports, had a history of breaking rules and cheating others, including an acrimonious split that happened early on at Alameda, and evidently more recently frontrunning. But the people who raised flags were evidently ignored, or in other cases feared being pariahs for speaking out more publicly.
But the idea that I and others trusted in “the community” is itself a problem. Like Rob Wiblin, I generally subscribe to the idea that most people can be trusted. But I wasn’t sufficiently cautious about how trust of the kind “you won’t steal from my wallet, even if you’re pretty sure you can get away with it” doesn’t scale to “you can run a large business or charity with effectively no oversight.”
A community that trusts by default is only sustainable if it is small. Claiming to subscribe to EA ideas, especially in a scenario where you can be paid well to do so, isn’t much of a reason to trust anyone. And given the size of the EA community, we’ve already passed the limits of where trusting others because of shared values is viable.
Failures of Trust
There are two ways to have high trust: naivety, and sophistication. The naive way is what EA groups have employed so far, and the sophisticated way requires infrastructure to make cheating difficult and costly.
To explain, when I started in graduate school, I entered a high-trust environment. I never thought about it, partly because I grew up in a religious community that was high trust. So in grad school, I was comfortable if I left my wallet on my desk when going to the bathroom, or even sometimes when I had an hour-long meeting elsewhere in the building.
I think during my second year, someone had something stolen from their desk - I don’t recall what, maybe it was a wallet. We all received an email saying that if someone took it, they would be expelled, and that they really didn’t want to review the security camera footage, but they would if they needed to. It never occurred to me that there were cameras - but of course there were, if only because RAND has a secure classified facility on campus, and security officers that occasionally needed to respond to crazy people showing up at the gate. That meant they could trust, because they can verify.
Similarly, the time sheets for billing research projects, which were how everyone including the grad students got paid, got reviewed. I know that there were flags, because another graduate student I knew was pulled in and questioned for billing two 16-hour days one week. (They legitimately had worked insane hours those two days to get data together to hit a deadline - but someone was checking.) You can certainly have verifiably high trust environments if it’s hard to cheat and not get caught.
But EA was, until now, a high-trust group by default. That’s a huge advantage in working with others: knowing you are value aligned, that you can assume others care, and that you can trust them, means coordination is far easier. The FTX incident has probably partly destroyed that. (And if not, it should at least cause a serious reevaluation of how we use social trust within EA.)
Restoring Trust?
I don’t think that returning to a high-trust default is an option. Instead, if we want to reestablish high trust throughout the community, we need to do so by fixing the lack of basis for the trust - and that means institutionalization and centralizing. For example, we might need institutions to “credential” EA members, or at least EA organizations, perhaps to allow democratic control, or at least clarity about membership. Alternatively, we could double down on centralizing EA as a movement, putting even more power and responsibility on whoever ends up in charge - a more anti-democratic exercise.
However we manage to rebuild trust, it’s going to be expensive and painful as a transition - but if you want a large and growing high trust community, it can’t really be avoided. I don’t think that what Cremer and Kemp suggest is the right approach, nor are Cremer’s suggestions to MacAskill sufficient for a large and growing movement, but some are necessary, and if those measures are not taken, I think that the community should be announcing alternative structures sooner rather than later.
This isn’t just about trust, though. We’ve also seen allegations that EA as a community is too elitist, that it’s not a safe place for women, that it’s not diverse enough, and so on. These are all problems to address, but they are created by a single decision - to have an EA community at all. And the easy answer to many problems is to have a central authority, and build more bureaucracy. But is that a good idea?
The alternative is rethinking whether EA should exist as a community at all. And - please save the booing for the end - maybe it shouldn’t be one.
What would it mean for Effective Altruism to not be a global community?
Obviously, I’m not in charge of the global EA community. No one is, not even CEA, with a mission “dedicated to building and nurturing a global community.” Instead, individuals, and by extension, local and international communities are in charge of themselves. Clearly, nobody needs to listen to me. But nobody needs to listen to the central EA organizations either - and we don’t need to, and should not, accept the status quo.
I want to explore the claim that trying to have a single global community is, on net, unhelpful, and what the alternative looks like. I’m sure this will upset people, and I’m not saying the approach outlined below is necessarily the right one - but I do think it’s a pathway we, yes, as a community, should at least consider.
And I have a few ideas what a less community-centered EA might look like. To preface the ideas, however, “community” isn’t binary. And even at the most extreme, abandoning the idea of EA as a community would not mean banning hanging out with other people inspired by the idea of Effective Altruism, nor would it mean not staying in touch with current friends. It would also not mean canceling meet-ups or events. But it does have some specific implications, which I’ll try to explore.
Personal Implications
First, it means that “being an EA” would not be an identity.
This is probably epistemically healthy - the natural tendency to defend an in-group is far worse when attacks seem to include you, instead of attacking a philosophy you admire, or other individuals who like the same philosophy. I don’t feel attacked when someone says that some guy who reads books by Naomi Novik is a jerk[1], so why should I feel attacked when someone says a person who read and agreed with “Doing Good Better” or “The Precipice” is a jerk?
Not having EA as an identity would also mean that public relations stops being a thing that a larger community cares about - thankfully. Individual organizations would, of course, do their own PR, to the extent that it was useful. This seems like a great thing - community PR isn’t something anyone should be focused on. We should certainly be concerned about ethics, about not doing bad things - not about the way it looks.
Community Building Implications
Not having EA as a community obviously implies that “EA Community Building” as a cause area, especially a monolithic one, should end. But I think in retrospect, explicitly endorsing this as a cause to champion was a mistake. Popularizing ideas is great, and bringing together people with related interests is helpful, but some really unhealthy dynamics were created, and fixing them seems harder than simply abandoning the idea and starting over.
This would mean that we stopped doing “recruitment” on college campuses - which was always somewhat creepy. Individual EAs on campus would presumably still tell their friends about the awesome ideas, recommend books, or even host reading groups - but these would be aimed at convincing individuals to consider the ideas, not to “join EA.” And individuals in places with other EAs would certainly be welcome to tell friends and have meet-ups. But these wouldn’t be thought of as recruitment, and they certainly wouldn’t be subsidized centrally.
Wouldn’t this be bad?
CEA’s web site says “Effective altruism has been built around a friendly, motivated, interesting, and interested group of people from all over the world. Participating in the community has a number of advantages over going it alone.” Would it really be helpful to abandon this?
My answer, tentatively, is yes. Communities work well with small numbers of people, and less well as they grow. A single global community isn’t going to allow high trust without building, in effect, a church. I’m fairly convinced that Effective Altruism has grown past the point where a single community can be safe and high trust without hierarchy and lots of structure, and don’t know that there’s any way for that to be done effectively or acceptably.
Of course, individuals want and need communities - local communities, communities of shared interest, communities of faith, and so on. But putting the various parts of effective altruism into a single community, I would argue, was a mistake.
More Implications, and some Q&A
Would this mean no longer having community building grants, or supporting EA-community institutions?
First, I think that we should expect communities to be self-supporting, outside of donor dollars. Having work spaces and similar is great, but it’s not an impartially altruistic act to give yourself a community. It’s much too easy to view self-interested “community building” as actually altruistic work, and a firewall would be helpful.
Given that, I strongly think that most EAs would be better off giving their 10% to effective charities focused on the actual issues, and then paying dues or voluntarily contributing other, non-EA-designated funds for community building. That seems healthier for the community, and as a side-benefit, removes the current centralized “control” of EA communities, which are dependent on CEA or other groups.
There are plenty of people who are trying to give far more than 10% of their income. Communities are great - but paying for them is a personal expense, not altruism. And from where I stand, giving half your salary to the “altruistic cause” of having community events and recruiting more people isn’t effective altruism. I would far rather have people giving “only” 10% to charity, and using their other money for paying dues towards hosting or helping to subsidize fun events for others in their community, or paying to work in an EA-aligned coworking space.
Similarly, college students and groups that wanted to run reading clubs about EA topics would be more than welcome to ask alumni or others to support them. There is a case to be made for individuals spending money to subsidize that - but things like community retreats should be paid for by attendees, or at most, should be subsidized with money that wasn’t promised to altruistic causes.
What about EA Global?
I think it would mean the end of “EA Global” as a generic conference. I have never attended, but I think having conferences where people can network is great - however, the way these are promoted and paid for is not. Davos is also a conference for important people to network - and lots of good things are done there, I am sure. We certainly should not be aiming for having an EA equivalent.
Instead, I would hope the generic EA global events are replaced by cause and career specific conferences, which would be more useful at the object level. I also think that having people pay to attend is good, instead of having their hotel rooms and flights paid for. If there are organizations or local groups that send people, they would be welcome to pay on behalf of the attendees, since they presumably get value from doing so. And if there are individuals who can’t otherwise afford it, or under-represented groups or locations, scholarships can be offered, paid for in part by the price paid by other attendees, or other conference sponsors. (Yes, conferences are usually sponsored, instead of paid for by donations.)
Wouldn’t this make it harder for funders to identify promising younger values aligned people early?
Yes, it would. But that actually seems good to me - we want people to demonstrate actual ability to have impact, not willingness to attend paid events at top colleges and network their way into what is already a pretty exclusive club.
Wouldn’t this tilt EA funders towards supporting more legibly high-status people at top schools?
It could, and that would be a failure in the design of the community. That seems bad to me, and it should be countered with more explicitly egalitarian efforts to find high-promise people who didn’t have parents who attended Harvard - but paid and exclusive conferences don’t address that problem either. Effective Altruism doesn’t have the best track record in this regard, and remedies are needed - but preserving the status quo isn’t a way to fix the problem.
Should CEA be defunded, or blamed for community failures?
No, obviously not. This post does explicitly attack some of their goals, and I hope this is taken in the spirit it is intended - as exploration and hopefully constructive criticism. They do tons of valuable work, which shouldn’t go away. If others agree that the current dynamics should change, I am still unsure how radically CEA should change direction. But if the direction I suggest is something that community members think is worth considering, CEA is obviously the key organization which would need to change.
Is this really a good idea?
It certainly isn’t something to immediately do in 2023, but I do think it’s a useful direction for EA to move towards. And directionally, I think it’s probably correct - though working out the exact direction and how it should be done is something that should be discussed.
And even if people dislike the idea, I hope it will prompt discussion of where current efforts have gone wrong. We should certainly be planning for the slightly-less-than-immediate term, and publicly thinking about the direction of the movement. We need to take seriously the question of what EA looks like in another decade or two, and I haven’t seen much public thinking about that question. (Perhaps longtermism has distracted people from thinking on the scale of single decades. Unfortunately.)
But rarely is a new direction something one person outlines, and everyone decides to pursue. (If so, the group is much too centrally controlled - something the founders of EA have said they don’t want.) And I do think that something like this is at least one useful path forward for EA.
If EA individuals and groups take more of this direction, I think it could be good, but details matter. At the same time, trajectory changes for large groups are slow, and should be deliberated about. So the details I’ve outlined are meant to push the envelope, and prompt consideration of a different path we could take than the one we are on.
[1] I promise I picked this as an example before Eliezer wrote his post. Really.
In the most respectful way possible, I strongly disagree with the overarching direction put forth here. A very strong predictor of engaged participation and retention in advocacy, work, education and many other things in life is the establishment of strong, close social ties within that community.
I think this direction will greatly reduce participation and engagement with EA, and I'm not even sure it will address the valid concerns you mentioned.
I say this despite the fact that I didn't have super close EA friends in the first 3-4 years, and still managed to motivate myself to work on EA stuff, as well as successful policy advocacy in other areas. When it comes to getting new people to partake in self-motivated, voluntary social causes/projects, one of the first things I do is to make sure they find a friend to keep them engaged, and this likelihood is greatly increased if they simply meet more people.
I am also of the opinion that long-term engagement relying on unpaid, ad-hoc community organising is much more unreliable than paid work. I think other organisers will agree when I say: organising a community around EA for the purpose of deeply engaging EAs is time-consuming, and great... (read more)
What is missing to me is an explanation of exactly how your suggestions would prevent a future SBF situation. It's not really clear to me that this is true. The crux of your argument seems to come from this paragraph:
Would this have been any different if EA consisted of an archipelago of affiliated groups? If anything, whistleblowing is easier in a large group, since you have a network of folks you can contact to raise the alarm. Without a global EA group, who exactly do the ex-Alameda folks complain to? I guess they could talk to a journalist or something, but "trading firm CEO is kind of an amoral dick" isn't really newsworthy (I'd say that's proba... (read more)
"What is missing to me is an explanation of exactly how your suggestions would prevent a future SBF situation."
1. The community is unhealthy in various ways.
2. You're suggesting centralizing around high trust, without a mechanism to build that trust.
I don't think that the EA community could have stopped SBF, but they absolutely could have been independent of him in ways that mean EA as a community didn't expect a random person most of us had never heard of before this to automatically be a trusted member of the community. Calling people out is far harder when they are a member of your trusted community, and the people who said they had concerns didn't say it loudly because they feared community censure. That's a big problem.
FWIW, I've generally assumed that causality goes the other way, or a third factor causes both.
Haven't finished reading yet, but I feel obliged to flag* (like anywhere else where they come up) that this paragraph:
is linking to a known cult leader. This is deeply ironic.
*The reason I think this should be stated every time is that there are many new people coming in all the time, and it's important that none of them encounter these people without the corresponding warning.
I think this comment would be much more helpful if it linked to the relevant posts about Leverage rather than just called Geoff a "known cult leader".
(On phone right now but may come back and add said links later unless Guy / others do)
Upvoted despite disagreeing, since I think this is an important question to explore. But I'm puzzled by the following claim:
Obviously the motivation for community-building is not that the community is an end in itself, but instrumental: more people "joining EA", taking the GWWC pledge and/or going into directly high-impact work, means indirectly causing more good for all the other EA causes that we ultimately care about. Without addressing this head-on, I'm not sure which of the following you mean:
(1) An empirical disagreement: You deny that EA community-building is instrumentally effective for (indirectly) helping other, first-order EA causes.
(2) A moral/conceptual disagreement: You deny that indirectly causing good counts as altruism.
Can you clarify which of these you have in mind?
I took OP's point here to be that this logic looks suspiciously like the kind of rationalizations EA got its start criticizing in other areas.
"Why do they throw these fancy gala fundraising dinners instead of being more frugal and giving more money to the cause?" seems like a classic EA critique of conventional philanthropy. But once EA becomes not just an idea but an identity, then it's understood that building the community is per se good, so suddenly sponsoring a fellowship slash vacation in the Bahamas becomes virtuous community building. To anyone outside the bubble, this looks like just recapitulating problems from elsewhere.
Hmm, I think of the "classic EA" case for GiveWell over Charity Navigator as precisely based on an awareness that bad optics around "overhead", CEO pay, fundraising, etc., aren't necessarily bad uses of funds, and we should instead look at what the organization ultimately achieves.
I don't mean either (1) or (2), but I'm not sure it's a single argument.
First, I think it's epistemically and socially healthy for people to separate giving to their community from altruism. To explain a bit more, it's good to view your community as a valid place to invest effort independent of eventual value. Without that, I think people often end up being exploitative, pushing people to do things instead of treating them respectfully, or being dismissive of others, for example, telling people they shouldn't be in EA because they aren't making the right choices. If your community isn't just about the eventual altruistic value they will create, those failure modes are less likely.
Second, it's easy to lose sight of eventual goals when focused on instrumental ones, and get stuck in a mode where you are goodharting community size, or dollars being donated - both seem like unfortunately easy attractors for this failure.
Third, relatedly, I think that people should be careful not to build models of impact that are too indirect, because they often fail at unexpected places. The simpler your path to impact is, the fewer failure points exist. Community building is many steps removed from the objective, and we should certainly be cautious about doing naïve EV calculations about increasing community size!
Separate but related to community, I think your point about identity, and whether fostering EA as an identity is epistemically healthy, is also relevant to (1).
Your analogy to church spoke very powerfully to me and to something I have always been a bit uncomfortable with. To me, EA is a philosophy/school of thought, and I struggle to understand how a person can "be" a philosophy, or how a philosophy can "recruit members".
I also suspect that a strong self-perception that one is a "good person" can just as often provide (internal and external) cover for wrong-doing as it can be a motivator to actually do good, as any number of high-profile non-profit scandals (and anecdotal experience from I'm guessing most young women who have ever been involved in a movement for change) can tell you.
I have nothing at all against organic communities, or professional conferences etc, but I also wonder whether there is evidence that building EA as an identity ("join us!") as opposed to something that people can do is instrumentally effective for first-order causes. Maybe it does, but I think it warrants some interrogation.
It's worth considering Eric Neyman's questions: (1) are the proposed changes realistic, (2) would the changes actually have avoided the current crisis, and (3) would its benefits exceed its costs generally.
On (1), I think David's proposals are clearly realistic. Basically, we would be less of an "undifferentiated social club", and become more like a group of academic fields, and a professional network, with our conferences and groups specialising, in many cases, into particular careers.
On (2), I think part of our mistake was that we used an overly one-dimensional notion of trust. We would ask "is this person value-aligned?", as a shorthand for evaluating trustworthiness. The problem is that any self-identified utilitarian who hangs around EA for a while will then seem trustworthy, whereas objectively, an act utilitarian might be anything but. Relatedly, we thought that if our leadership trusted someone, we must trust them too, even if they were running an objectively shady business like an offshore crypto firm. This kind of deference is a classic problem for social movements as well.
Another angle on what happened is that FTX behaved like a splinter group. Being a movement means you can conv... (read more)
On the one hand, I have a strong urge to say something like: But David, community building is not only useful for "trust" and "vetting people"!
On the other hand - in the last 2.5 years as a community builder, I was fighting desperately to make EA groups more practical and educational, instead of social and network-based.
I'm not the only one. I know many other community builders who tried to argue that our resources should focus on "tools", and less on anecdotes about why the maximization mindset is important or about the most pressing cause areas.
Instead, I think that the value that we provide should be something in the lines of providing them with actual tools for applying the maximization mindset, and for prioritizing their career/donation/research/etc opportunities by social impact.
Almost everyone I spoke with agreed with this notion, including multiple representatives from CEA - but nothing changed so far regarding groups' resources or incentives.
So, if the main value of community building was meant to be for vetting, then I'd say that the community failed. I don't strongly believe this is the case right now, but I think that many perceive th... (read more)
I really like this framing Gideon. It seems aligned with CEA's Core EA principles. I'd love EA to be better at helping people learn skills. One of our working drafts for an EA MOOC focuses more on those core principles and skills. Is something like this work-in-progress closer to what you had in mind?
This seems very plausibly a better direction. I think we agree there is something wrong, and the direction you're pointing may be a better one - but I'm concerned, because I don't see a way to get an extant and large community to shift, and think that we need a more concrete theory of change...
Speaking of which, "I still don’t know where to find a good, simple article or video that describes how to create a theory of change" - you should have asked! I'd recommend here and here. (I also have a couple more PDFs of relevant articles from classes in grad school, if you want.)
One small personal experience: I worked a non-EA job for three years. None of my close friends were interested in EA, and my job wasn’t in a highly impactful cause area. I developed some other interests during those years, reading a lot about startups and VC and finance. Despite my enthusiasm when I first read Peter Singer and Doing Good Better, I think my interest in working on EA topics could have slowly faded and been replaced with other interesting ideas.
The EA community was a big part of what kept me engaged with EA. This forum was a steady stream of information about how to do good in the world, and one that allowed me to voice my own opinions and have lots of interesting conversations. I attended two online EA Globals which mostly made me identify more as an EA. Later I went back to school, where the university EA group leader reached out and encouraged me to join a reading group. We had weekly dinners and great conversations, and only a few months later, I quit my part-time job at a for-profit startup and began working on AI Safety.
It’s hard to say what the counterfactual is, but I think the odds I’d be working on AI Safety right now would have been much lower without the i... (read more)
The first time I heard about effective altruism was when someone told me "you should check out your local EA community--it's where the cool people hang out". Indeed this turned out to be the case; I had met thoughtful, curious people, and gotten quite interested in the ideas of effective altruism.
When I moved a year later to a different city in a different country, I spent a few weeks lamenting the difficulty of meeting people, until I realized I could go have thoughtful and interesting conversations with wonderful people at the local EA community. In a way, it feels not unlike churches in towns in the United States - you can move from one state to another, and yet the next Sunday you'll find a tight-knit community of people who approximately share a large subset of your values. And while in some denominations of Christianity there's a large global hierarchical structure, as far as I am aware, there's no global fund for starting new churches (I may be totally wrong, I don't actually know much about this. Though I have heard of church planting, but even there, the new church "must eventually have a separate life of its own and be able to function without its parent body").
One of ... (read more)
As someone who has been deeply into community building for years (most of it outside EA), I am biting my lip yet upvoting this. I deeply agree that "Being an EA" as an identity has problematic implications, to say the least. While I have many thoughts, for now I'll just highlight what you wrote which for me is the most important: "[we should be] convincing individuals to consider the ideas, not to “join EA.”".
I'm not a member of the EA community, and in fact have been quite sceptical of it, but I do believe in the idea of altruism being effective, so I wanted to engage on a post like this. For context, I work on public health in developing countries, and have worked in a variety of fields in the traditional aid sector from agriculture to women's rights to civil society – in my observation public health is the most effective, followed by agriculture. While I'm sceptical of EA as a community, I do believe in some of the tenets, and even use services like GiveWell to guide my own donations. I wanted to ask some questions, if you or community members are willing to answer – bear with me as I don't know the EA jargon that well.
One of the main reasons I'm sceptical is sort of a generalised scepticism of cliques, identities, and subcultures in general. Human beings are social animals, and we naturally seek status. So when a community/subculture forms, suddenly people seek status in it, seek to associate with its 'leaders' or popular causes, this short circuits the ostensibly rational analysis people think they're doing. Of course we don't know we're doing this, we think we're being p... (read more)
I appreciate this post, but pretty strongly disagree. The EA I've experienced seems to be at most a loose but mutually supportive coalition motivated by trying to most effectively do good in the world. It seems pretty far from being a monolith or from having unaccountable leaders setting some agenda.
While there are certainly things I don't love, such as treating EAGs as mostly opportunities to hang out, and things like MacAskill's seemingly very expensive and opaque book press tour, your recommendations seem like they would mostly hinder efforts to address the causes the community has identified as particularly important to work on.
For instance, they'd dramatically increase the transaction costs for advocacy efforts (i.e. most college groups) aimed at introducing people to these issues and giving them an opportunity to consider working on solving them. One of the benefits of EA groups is that it allows for a critical mass of people to become involved where there might not be enough interest to sustain clubs for individual causes (and again the costs of people needing to organize multiple groups). In effect, this would mostly just cede ground and attention to things like consul... (read more)
I don't know if an EA community is a good thing, but as a related point, I think it's worth sharing my view that the EA community as it currently exists, and in particular EA leadership, has done a very poor job of advancing the interests of EA causes.
At present, EA has an awful reputation; most people view the community with contempt and its ideas as noxious.
Candidly, I'm embarrassed to share any affiliation I have with EA to colleagues and non-close peers.
This didn't have to be this way, and frankly, given the virtues of EA, it takes a special type of failure to have steered the community down this path.
I think EA would be significantly better served if a number of leading EA orgs and thought leaders dramatically reevaluated their role, strategy and involvement with EA.
I only realised this recently, but honestly I think most people are embarrassed to share almost any ethical (political, religious, philosophical, etc) affiliation with their colleagues and non-close peers. So I think that avoiding that is an unreasonable standard to expect or be aiming for.
I can see why you'd see things this way as an EA adherent. But I'm sure many members of various political parties, movements, religious groups etc feel the same - given that our ideas are so obviously incredibly virtuous, it's exceptionally bad that our leaders have nevertheless managed to make us look bad to most people outside of the community (i.e. the set of people who don't think it looks so good that they've joined).
PR is hard. Extremely hard. Otherwise you wouldn't have thi... (read more)
(on phone so quick thoughts) Thank you for writing this. It is very brave! I am actually quite sympathetic to your arguments. I would like EA to evolve over time from being a movement into being a boring norm of behaviour and reasoning etc. Just like how being a suffragette gradually failed to have any purpose. However, I think that doing so right now is a bit premature. I'd welcome some more debate though.
Interesting to see that people disagree with me. I am interested to hear why if anyone wants to share.
Hi David, I think I follow your thinking, but I'm not hopeful that there is a viable route to "ending the community" or "ending community-building" or ending people "identifying as EAs", even if a slight majority agreed it was desirable, which seems unlikely.
On the other hand, I very much agree that a single Oxford or US-based organisation can't "own" and control the whole of effective altruism, and aiming not for a "perfect supertanker" but a varied "fleet" or "regatta" of EA entities would be preferable, and much more viable. Then supervision and gatekee... (read more)
Communities give hugely leveraged power to those who inform and represent them - you can get lots of people to change their minds or say "the community believes X"
While I think these powers are open to abuse, it's not to say that they aren't valuable.
A cost-benefit analysis seems appropriate here (and there have been a lot of costs recently) rather than suggesting that there are no benefits.
I think it’s interesting to explore far-out ideas, and I suppose it might make sense from the perspective of someone focused on near-termism.
However, as someone more focused on AI safety, one of the cause areas that is more talent-dependent and less immediately legible, this seems like it would be a mistake.
If the community is uncertain between the causes, I suggest that it probably wouldn’t be a good idea to dismantle the community now, at least if we think we might obtain more clarity over the next few years.
I think that AI safety needs to be promoted as a cause, not as a community. If you have personal moral uncertainty about whether to focus on animal suffering or AI risk, it might make sense to be a vegan AI researcher. But if you have moral uncertainty about what the priority is overall, you shouldn't try to mix the two.
People in machine learning are increasingly of the opinion that there is a risk, and it would be much better to educate them than to try to bring them into a community which has goals they don't, and don't need to, care about.
Honest question: isn't an option for the AI Safety community being just the AI Safety community, independent of there being an EA community?
I understand the idea of the philosophy of effective altruism and longtermism being a motivation to work in AI Safety, but that could as well be a worry about modern ML systems, or just sheer intellectual interest. I don't know if the current entanglement between both communities is that healthy.
EDIT: Corrected stupid wording mistakes. I wrote in a hurry.
I certainly think that having an academic discipline devoted to AI safety is an option, but I think it's a bad idea for other reasons; if safety is viewed as separate from ML in general, you end up in a situation similar to cybersecurity, where everyone builds dangerous shit, and then the cyber people recoil in horror, and hopefully barely patch the most obvious problems.
That said, yes, I'm completely fine with having informal networks of people working on a goal - those exist regardless of efforts. But a centralized effort at EA community building in general is a different thing, and as I argued here, I tentatively think this is bad, at least at the margin.
Philosophers have perhaps struggled with these impossible questions before,
https://brill.com/view/journals/rip/26/1/article-p25_2.xml?language=en
-- being new here, I have the feeling that for all of its love of applied utilitarian philosophy, this community is disappointing in its lack of openness to other philosophical readings.
I understand that due to the recency and impact of events there is a work of mourning happening in this forum, and proposing healthy paths through it, if possible, is important to those who have belonged and formed affinities and conversations. The trajectory change has already happened, it just may not be recognizable yet.
A few thoughts, excuse the mess:
I think that local groups should continue to exist, they are what makes up the community. In my view, you can run a local group well without outside funding. In any case, your local group should not be dependent on outside funding.
I'd like to see CEA run a topic-specific conference.
I'd like to see more democratically organized EA structures, like the German https://ealokal.de
I'd like to see more grassroots community events by and for community members.
None of this would have prevented SBF from committing fraud, but we would feel fewer ripple effects in the community.
Which ones do you think are necessary?
Ombudsmen and clear rules about and norms of protection for whistleblowing, more funding transparency, and better disclosure about conflicts of interest.
(None of these relate to having a community, by the way - they are just important things to have if you care about having well run organizations.)
This hurts but it checks out.
I think there’s a kernel of truth to this suggestion! I would put it this way: EA should be global, and it should continue to be powered by communities, but those communities should be local and small.
First, work in specific cause areas should continue to happen globally, but should not operate with an automatic assumption of trust.
“Low trust” wouldn’t mean we stop doing a lot of good; it would just mean that we need to be more transparent and rigorous, rather than just having major EA figures texting with billionaires and the rest of us just hoping they d... (read more)
I think it's wise to separate the FTX and due diligence issue from the broader thesis. Here I'm just commenting on due diligence with donors.
Who was/is responsible for checking the probity or criminality of ...
(a) FTX and Alameda?
(b) donors to a given charity like CEA? (I put some links on this below)
(a) First it's their own board/customers/investors, but presumably supervisory responsibility is or should also be with central bank regulators, FBI, etc. If the CEO of a company is a member of Rotary, donates to Oxfam, invests in a football team, i... (read more)