Introduction
In this piece, I will explain why I don't think the collapse of FTX and the resulting fallout for the Future Fund and the EA community in general is a one-off or 'black swan' event, as some have argued on this forum. Rather, I think that what happened was part of a broader pattern of failures and oversights that has been persistent within EA and EA-adjacent organisations since the beginning of the movement.
As a disclaimer, I do not have any inside knowledge or special expertise about FTX or any of the other organisations I will mention in this post. I speak simply as a long-standing and concerned member of the EA community.
Weak Norms of Governance
The essential point I want to make in this post is that the EA community has not been very successful in fostering norms of transparency, accountability, and institutionalisation of decision-making. Many EA organisations began as ad hoc collections of like-minded individuals with very ambitious goals but relatively little career experience. This has often meant that inadequate organisational structures and procedures were established for personnel management, financial oversight, external auditing, and accountability to stakeholders. Let me illustrate my point with some major examples I am aware of from EA and EA-adjacent organisations:
- Weak governance structures and financial oversight at the Singularity Institute, leading to the theft of over $100,000 in 2009.
- Inadequate record keeping, rapid executive turnover, and insufficient board oversight at the Centre for Effective Altruism over the period 2016-2019.
- Inadequate financial record keeping at 80,000 Hours during 2018.
- Insufficient oversight, unhealthy power dynamics, and other harmful practices reported at MIRI/CFAR during 2015-2017.
- Similar problems reported at the EA-adjacent organisation Leverage Research during 2017-2019.
- 'Loose norms around board of directors and conflicts of interests between funding orgs and grantees' at FTX and the Future Fund from 2021-2022.
While these specific issues are somewhat diverse, I think what they have in common is an insufficient emphasis on principles of good organisational governance. These range from the most basic, such as clear objectives and good record keeping, to more complex issues such as external auditing, sound systems of accountability, transparency of the organisation to its stakeholders, avoiding conflicts of interest, and ensuring that systems exist to protect participants in asymmetric power relationships. I believe that these aspects of good governance and robust institution building have not been very highly valued in the broader EA community. In my experience, EAs like to talk about philosophy, outreach, career choice, and other nerdy stuff. Discussing best practice in organisational governance and systems of accountability doesn't seem very high status or 'sexy' in the EA space. There has been some discussion of such issues on this forum (e.g. this thoughtful post), but overall EA culture seems to have failed to properly absorb these lessons.
EA projects are often run by small groups of young idealistic people who have similar educational and social backgrounds, who often socialise together, and (in many cases) participate in romantic relationships with one another - the case of Sam Bankman-Fried and Caroline Ellison is certainly not the only such example in the EA community. EA culture seems to be heavily influenced by start-up culture and entrepreneurialism, with a focus on moving quickly, finding highly-skilled and highly-aligned people, and then providing them funding and space to work with minimal oversight. A great deal of reliance is placed on personal relationships and trust in the good intentions of fellow EAs. Of course, reliance on trust is not bad in itself and is hardly unique to EA; however, I think in the context of the EA community it has led to a relative lack of interest in building sound and robust institutional structures.
Responding to Rebuttals
At this point I want to acknowledge some obvious rejoinders. First, I realise that governance has progressively improved at many of the older EA organisations over time. Nevertheless, based on my personal experience and reading of various organisational reports, as well as the obvious recent case of FTX, problems of weak governance, and the low priority they receive in the EA community, are still a major issue. Second, it is also true that many social movements and groups experience similar problems, and as I have no data on the issue I make no claim as to whether they are more or less common in EA than in comparable movements or communities. Nevertheless, I think governance norms are still a serious issue for the EA community, even if they are also an issue for other groups and movements.
The Need for Better Norms
So what, specifically, am I proposing? Again, I want to emphasise the point of this post is not to critique specific organisations like CEA or 80k, or argue about what they should change in any particular way. Rather, my goal is to encourage people in the EA community to internalise and better implement some of the core values of good governance and institutional design. Some of these include:
- Accountability: Who is in charge? Who reports to whom, about what, and how often? Who is accountable for particular decisions?
- Consideration of stakeholders: Who is affected by the actions and choices of an organisation or project? How are their interests incorporated into decision-making? Is leadership adequately accountable to stakeholders?
- Avoidance of conflicts of interest: Are conflicts of interests present due to personal, organisational, or financial ties? What procedures exist for identifying and reporting such conflicts? Are stakeholders adequately informed about actual or perceived conflicts of interests?
- Decision-making procedures: What formal procedures exist for arriving at important decisions? How is stakeholder feedback sought and incorporated? What, how, and where are records of decision processes kept? Who makes which types of decisions, and how are they held accountable for them?
- Power dynamics: What procedures exist for protecting parties in asymmetric power relationships? Are there adequate opportunities for anonymous complaints or concerns to be raised? How are high-status individuals held accountable in the event of wrongdoing?
Some readers may see these principles as rather stuffy or irrelevant to much of EA practice, but I think this attitude is precisely the problem. More consistent and considered application of these principles would, I believe, have significantly reduced the severity of many of the problems in EA organisations mentioned previously. While it's not necessary for every local group or every small project to have elaborate institutional formalisms or extensive documentation, practising the principles of good governance is, in my view, valuable for everyone, and should be something we regularly consider and discuss as EAs. I am not saying we should forget the values of dynamism or tight-knit communities, or that we should all turn into bureaucrats. I am saying that, as a community, we don't take good governance seriously enough or spend enough time thinking about it. It should also go without saying that the more power and responsibility a person or organisation acquires, and the more money they have stewardship over, the more important these principles become.
My overall message, therefore, is that good governance matters, strong institutions are important, and relying extensively on informal interpersonal bonds is often insufficient and prone to problems. I hope the EA community can continue to learn from its mistakes and seek to better internalise and actualise the values of good governance.
Hi, I'm pretty new here, so please correct me if I'm wrong. I did, however, have one important impression which I think I should share.
EA started as a small movement and right now is expanding like crazy. The thing is, it still has a "small movement" mentality.
One of the key aspects of this is trust. I have the impression that EA is heavily trust-based: if somebody calls themselves an EA, everybody assumes that they probably have altruistic intentions and mostly aligned values. It is lovely. But maybe dangerous?
In a small movement everybody knows everyone, and if somebody does something suspicious, the whole group can very easily spread the warning. In a large group, however, this won't work. So if somebody is a grifter, an amoral person, just an a*hole, or anything similar, they can very easily abuse the system, for example just by changing the EA crowd they talk to. I have the impression that there was a push towards attracting the maximum number of people possible. I assume that this was thought through and there is value added in it. It may, however, have a pretty serious cost.
Liv -- I agree that this is something to be very cautious about.
I have psychology colleagues who study psychopathy, sociopathy, and 'Dark Triad' personality traits. The people with these dangerous traits tend to selectively target smallish, naive, high-trust, do-gooding communities, such as fundamentalist Christian churches, non-profits, idealistic start-ups, etc. -- and maybe EA. The people in these groups tend to be idealistic, forgiving, trusting, tribal, mutually supportive, and inexperienced with bad actors. This makes them highly exploitable -- financially, sexually, socially, legally, etc.
Psych professor Essi Viding has a nice 'Very Short Introduction' to psychopathy here, which I recommend.
The best way to guard against such people is to learn about their traits and their typical social strategies and tactics, and then to, well, keep up one's guard.
(I'm not implying that SBF is a psychopath; it's not feasible or responsible to make a diagnosis of this sort without knowing someone personally.)
Thanks for the comments! I also wanted to clarify one thing - I'm not talking only about super serious cases, i.e. criminals or abusers. I think that a much more common system failure would be to over-trust "small" grifters who live from one grant to another, people who don't keep secrets (including professional secrets), those who are permanently incompetent and unwilling to change, etc. I think that those people, if not "caught" early enough, can also cause a lot of trouble and minimize the impact of even the best initiative.
Also, you know, it's absolutely great to have the feeling that you can trust all the people in the room you are in. I think there's huge value in creating such an environment. But I have no idea how to do that in the case of EA - it seems too big, too rich, and growing too fast. I guess some super effective system would be needed, but again, I don't know. Maybe, sadly but very possibly, it's impossible in such an environment - if so, we need to adjust our assumptions and behavior, and we should probably do it fast.
I don't have much to add but I found this exchange super interesting, thanks for that.
As someone who's worked in the mental health field, the other interesting thing I've read about ASPD/psychopathy is that they heavily rely on cognitive empathy over affective empathy, which ... is actually a very EA trait in my opinion.
So even without nefarious intentions, I think people with ASPD would be drawn to/overrepresented within EA.
I felt a bit stressed when I saw that the discussion turned into talk about ASPD, and now I realized why.
Firstly, we should hold accountable all people who display unwanted behavior, regardless of their diagnosis. I'm afraid that a focus on ASPD will shift our attention from "all abusive/offensive/deceitful behaviors shouldn't be accepted" to "let's be careful if somebody has ASPD". I think that focusing on (especially repeated) behaviors and their consequences is a much better strategy here.
Secondly, it's hard to diagnose somebody, and doing so in a non-clinical setting is unethical, so if we start worrying about letting people with ASPD "into EA" we have no way to actually prove or disprove our point. But some people may end up trying, and home-made psychoanalysis is, well, not good.
So, to summarize - I personally just think that shifting the focus from "how to trace overall unwanted behavior" to "whether EA may attract people with ASPD" may yield worse results.
Yeah, I agree. The only reason I even engaged is because a psych I saw noted down that I show signs of it, and I roll my eyes whenever psychopathy pops up in a discussion cause people just use it as a synonym for malicious.
Reading about ASPD, it's kinda weird how people read "15% of CEOs and criminals have ASPD" and think "ASPD is the criminality and corruption disease" instead of "85% of the people we should watch out for are totally capable of abuse with a perfectly normal brain, so our processes should work regardless of the offender's brain".
IDK, just really weird scapegoating. The original point was pretty much just about "malicious bad-faith actors" and nothing to do with ASPD.
Most fraudulent activities are committed by normal people who rationalize their way into them when opportunities or gaps present themselves and they happen to need the financial gain.
Interesting. I would have said the opposite -- that many EAs are what Simon Baron-Cohen calls 'high systematizers' (which overlaps somewhat with the Aspergers/autism spectrum), who tend to be pretty strong on affective empathy (e.g. easily imagining how horrible it would be to be a factory-farmed chicken, a starving child, or a future AGI), but who are often somewhat weaker on cognitive empathy (i.e. using Theory of Mind to understand other people's specific beliefs and desires accurately, follow social norms, communicate effectively with others who have different values and assumptions, manipulate and influence other people, etc.).
I agree that psychopaths rely much more on cognitive empathy than on affective empathy. But by my reckoning, this Aspergers-vs-psychopaths dimension implies that EA includes relatively few psychopaths, and relatively many Aspy people (like me).
(FWIW, I'd recommend the recent Simon Baron-Cohen book 'The pattern seekers', about the evolutionary origins and technological benefits of high systematizing.)
But people in EA think a lot about how to reduce the suffering of others, and place great importance on morality and doing the right thing, which is the exact opposite of sociopathy. I think this matters more than whether people in the community are heavily "cognitive", and people with ASPD should be underrepresented. Moreover, a lot of people seem to be motivated by affective empathy, even though they then try to use cognitive empathy to think about what is best.
Agree. I don't know if you meant this too, but I also think that focusing on one particular person who manages to have a lot of influence among the members of his or her local EA group/organisation, or generally creating a kind of cult of personality around a few leading figures of the movement, can be dangerous in the long run. SBF is in some ways an example of the unilateralist's curse.
I didn't have that in mind :). But let me think about it.
Maybe there's something to it (my epistemic status: I just thought about it, so please be critical). The majority of the EA community consists of super young overachievers who strongly believe that one's worth needs to be proven and can be measured. There is also a small portion of the community which is much older, more mature, and simply impressive. I don't know if this causes the community to be cultish, but it may enable it.
I personally don't feel any "authority idealization" vibes in EA; rather, quite the opposite. I have a pretty strong intuition that if I wanted to disagree with some EA "celebrity" I would be more than encouraged to do so, treated as a thought partner (if I have good arguments), and thanked for valid criticism. I also believe I could approach any EA person and just chat to them if we both wanted to chat, because why not, and if in the process I learn that this person is famous, well, ok, it wouldn't change the tone of the conversation. That being said, I have a pretty strong personality myself and I'm not intimidated easily. Plus, I'm in my late twenties, so older than the majority of EA new joiners, which may be important here.
I don't think that creating celebrities and hierarchies is avoidable, and I don't believe that saying some people are impressive is bad. I also think that it's super hard to stop people idealizing you if you are a leader, especially when the internet and community structure allow random people to have some insight into your private life. I also believe that if somebody keeps idealizing celebrities, a good "first step" is to seriously reflect on that schema and work on one's mindset first. I would not shift the blame onto the "celebrities" or the "community" only, because if the schema of "authorities" exists, the first step to breaking it is to make "fans" more self-aware, self-sufficient, and capable of acting for themselves.
I do, however, think that the topic is worth investigating and chatting about. All of the above being said, celebrities should take responsibility for their power. Blogs and websites should avoid creating idealized portraits of leaders. Everybody should have an equal right to speak and disagree with the "head" of any organization, and everybody should be equally criticized for making untrue statements or for any wrongdoing. Active idealization should be treated as a bias - because it is a bias - and so a mistake to work on. Finally, there should definitely be systems that can stop those with more power from abusing it if they try to.
Do you know if somebody has actually checked to what extent "being cultish" is a problem in EA, and whether it's more of a problem than in any other group? I wonder what the result of such research would be.
I'm getting the same read Liv.
To be honest, when I came here I expected that a movement as big as this would have some notion of best practices as far as governance goes, but unfortunately it does not; what is in place seems to err on the side of investigation rather than detection or prevention.
EA should adopt some of the ways that big multibillion-dollar industries have addressed scaling issues. I understand that the movement is still new to this, but again, fraud has been around forever wherever money is in play and human nature is involved.
Strong upvote for “EA still has small movement mentality”.
The appropriateness of diverting lots of resources from object level impact to good governance structures depends on how much resources a movement has overall, and I don’t think EA has appropriately adapted to the scale of its size, wealth and influence.
With great power comes great responsibility.
I think this is a very helpful post.
I think some of the larger, systemically important organisations should have a balance of trustees and/or a board of advisors with relevant mission-critical experience, such as risk management, legal, and compliance, depending on the nature of the organisation. I appreciate that senior executives and trustees in these organisations do seek such advice; but it is often too opaque whom they consult and which areas the advice covers, and there can be a lack of accountability and a risk of the advisors lacking sufficient knowledge themselves.
I raised this directly a number of years ago, but things are perhaps still inadequate. As noted by others, this becomes more important as we get bigger.
Ps I don’t post much and not as accurate with my choice of words as other forum users.
Luke, I completely agree. In fact, I'm trying to do something about it.
Can confirm that Luke was a huge proponent of this in our interactions from ~2016 to ~2019. It's one of the primary reasons Rethink Charity created and maintained our governance structure, which I thought was only moderately good, but likely above average relative to what I've seen and heard about in the community.
Agree. I'd also add that this is a natural effect of the focus EA has put on outreach at universities and to young people. Not to say that the young people are the problem--they aren't, and we are happy to have them. But in prioritizing that, we did deprioritize outreach to mid- and late-stage professionals. CEA and grantmakers only had so much bandwidth, and we only had so many people suited to CB/recruiting/outreach-style roles.
We have had glaring gaps for a while in ability to manage people, scale programs, manage and direct projects and orgs, and perform due diligence checks on and advising for EA organisations. In other words, we lack expertise.
I'd say 80K has been somewhat aware of this gap and touched on it lightly, and the community itself has dialled in on the problem by discussing EA recruiters. Yet CEA, funders, and others working on movement-building seem to repeatedly conflate community building with getting more young people to change careers, revealing their priorities, IMO, by what they actually work on.
Open Phil has done this as well. Looking at their Effective Altruism Community Growth focus area, 5 out of the 6 suggestions are focused on young people. The sixth is translating EA materials, so all options to directly work with promising people are focused on young people. Admittedly there is a form to submit other ideas, but given what looks like a ~5/5.5 base rate of thinking youth are worth mentioning where they could have mentioned something else, I'm not hopeful they care about non-youth interventions. When I look at that page, I can't help but think, "So, are we building a sustainable movement and network of professionals, or are we essentially running disjointed Thiel fellowships?"
Things I'd like to see to increase expertise and experience in EA (in addition to new roles, interventions, and EA orgs focused on improving governance in EA):
I hope it's not moot to discuss funder priorities now: I'd like to see funders and grantmakers overtly acknowledge that we need expertise from professionals outside the EA movement. The expertise bottleneck is the other side of the coin to the operations bottleneck, and it never got addressed. I'd also like to see blunt transparency about their reasons for not prioritizing or calling for people to work on building experienced professional leadership and leadership assistance in EA orgs. If the reasoning is made overt, perhaps we can workshop it; e.g., if they think EAs are bad at talking to late-stage professionals, we hire someone who is good, or we ask a couple of promising comms EAs to go through a class on recruiting executives.
I'd also like to see EA individuals and orgs themselves take on this mantle of increasing expertise in EA. It feels like EA individuals have been saying this for a while but very few have been trying to solve the problem. Charity Entrepreneurship could even add charities with related missions to their incubation program. It can't be that different from Animal Advocacy Careers which they incubated already.
I'd also like to see more nuanced terms than "community building" or "movement building" to better clarify what is being prioritized under the hood. Governance-building, professional network-building, and direct-worker building all have different focuses (and I could name so many more). I think the vagueness of the CB term could be responsible for a lot of our gaps, and also responsible for the lack of promising outcomes CB grantees might have seemed to yield, from the point of view of funders and grantmakers.
Very much agree with everything you said. I have shared my opinions on this a lot in the past days and am happy to see that more and more people recognize the issue with scaling without a structure built around it.
This is a terrific post, thank you! And it’s generated some excellent discussion too. I particularly agree with Ivy’s thoughts about the effects of focusing on recruiting young people (I’ve discussed some similar things here and here) and Tobyj’s comment about how “more work needs to be done on governance at the ecosystem level” vs. the level of specific organizations.
In my opinion, the case for increased attention to governance is even stronger, because your list of examples omits some of the community's largest historical and ongoing instances of weak or unprioritized governance.
I think this is one of the most important comments I've seen on the forum since I joined.
For me personally, both it and your post on CEA have valuable information that I didn't know prior to reading them.
I think tackling these problems (some of which have easy solutions, like making boards larger and more diverse and independent) should be one of the top things the EA community starts demanding from these orgs.
I’m gratified you find my contributions helpful Guy, thank you for the positive feedback.
I'm somewhat skeptical that there are a lot of easy solutions here. "Making boards larger and more diverse and independent" would help in my view, but a lot depends on procedural factors (how new board members are selected, who has input into that process, what the board's mandate is, etc.) as well as cultural factors (how empowered the board feels, how much the organization engages with the board, whether stakeholders feel represented by the board, etc.). I'd argue that the advisory panel CEA created in 2017 is a good example of how simply adding more perspectives isn't sufficient; the community had no input into the panel's composition or mission, and the panel seems to have petered out pretty quickly as CEA appears to have stopped consulting it.
In my opinion, one of the best easy starting points would be for EA organizations and individuals to investigate the EA Good Governance Project, which seems well positioned to improve governance of specific EA organizations.
The trickier, and probably more important, task is to improve community level governance. In this space, I'd argue the highest priority is starting to address the question of how governance should work at Effective Ventures. This is not an easy question, but it is a critical one due to a) the importance of EV's work to EA; b) the implausibility (in my opinion) of EV's current board providing good oversight to its wide-ranging projects; c) EV operating certain projects on behalf of the broader community (e.g. community health, EAG, effectivealtruism.org); and d) allegations that EA leaders including multiple EV board members were aware of SBF behaving unethically at Alameda in 2018, potentially misaligning the incentives of EV and the EA community.
Some of these factors certainly apply to other organizations. OpenPhil’s work is obviously critical to the EA community so (A) applies, and I suspect (B) does too. But OpenPhil hasn’t assumed responsibilities from the EA community so (C) doesn’t apply, which makes me somewhat sympathetic to the argument that it’s Dustin and Cari’s money and they can do whatever they want with it. So the combination of factors above is really why I think the broader governance discussion needs to start with EV.
However the EA community decides to pursue better governance, I hope it leverages existing expertise on the subject and avoids notions of “EA exceptionalism” (which I think is always problematic but particularly bad in a context like governance where EA has a poor track record). Brigid Slipka’s resignation letter from GiveWell’s board includes links to several resources, and includes a thoughtful framework of her own. I think that framework is useful for understanding EV’s governance; I’d characterize EV’s programs as in the “adolescent” stage progressing toward “mature”, while the board setup seems like it’s still at the “start-up” stage.
Who am I? EA aligned private equity investor / donor who has tried to become more involved in EA for almost a decade.
Ok. These incidents are all new to me but they’re all very bad.
Let's separate two concepts: governance and controls. Governance is a whole bunch of structure and decision things, including what boards of directors do. Controls are a whole bunch of policy and procedure things any and every organization should have so that occasional very bad things don't happen. They include things like limits on account access, signatories, cash reserves and investment policies, independent auditors and, critically, audit expertise at the board level via an audit committee. Maybe we all know this, but I wanted to separate them because:
I have been in a number of meetings where EA company/org leaders express concerns about establishing or expanding their boards because of "alignment issues" with potential members. I get it (kind of) - it's hard to put a board together with the right expertise AND devotion to the cause the way you like to see it. I have heard similar sentiment around bringing in finance/accounting professionals. Anyway, a key point: if you are paranoid about mission drift and want tight control by aligned founders, you can create an entire board infrastructure of non-voting or ex officio members to serve on important committees like audit and governance. I have done this a few times at normie nonprofits: it's a win for me because I get to contribute to something I care about without taking on the time commitment and liability associated with joining a board.
You all may know this stuff already, but I’ve been surprised how weak EA boards look from the outside: a handful of founders and the same 5-10 people from Oxford and adjacent circles… not ideal and frankly a huge red flag as a prospective donor/partner etc.!
Seems this is a good place to mention the EA Good Governance project.
Exactly this.
It's not a scalable solution to be doing analyses to investigate every source of funding in the ecosystem. It is scalable to have an overarching framework of governance requirements that must be in place for the institutions (FTX, CEA, etc.) that we interact with. Mandates around transparency in budget, inclusion of multiple stakeholders on boards, whistleblower infrastructure, etc.
I see this as a symptom of a larger problem that exists within the movement, which others have pointed to over the past few days: we think we're special.
Sure, we are approaching a different set of problems, often in a pretty substantially different way, with epistemic norms that we aspire to, etc.
But at the end of the day, I think we often write off the fact that we are all also human. Whatever happened at FTX, be it mistake or deceit, is a human failing, and it's core to human nature. That's why companies have boards. Because when we have no oversight, sometimes dumb stuff happens.
I also see this reflected in movement-building dynamics. EA is ultimately a social movement. Sure, it's lots of other things too, but at the end of the day, if people don't like EAs, or don't feel socially accepted here, they will leave. I feel like we often over-rationalize this part of the movement and think that if we can only show enough people the right arguments, then they will understand and join. But really, the growth dynamics of our movement are not too different from those of any other social movement. There is a lot that we can learn from this, and from others' mistakes, and I fear that we will continue to think that we are different and not learn the lessons that history can teach us.
Despite what some of us might hope, we are people just like all the other people in the world. So let's acknowledge that and work with it!
This is a very good point! Thanks.
Reminds me of the "skin in the game" concept by Taleb: systems don't learn because people learn individually. Systems learn at the collective level by the mechanism of selection, by eliminating those who are unfit.
Now, "eliminate" is a bit strong as wording - but FTX got "eliminated" in this fashion, so we ought to be careful here. Bankruptcy is not the only outcome, and neither is firing someone (as you said, many people can learn), but the important point is that there is a need for an internal structure that prevents incompetent actors from reaching the top, or at least for ways to limit their power to act unilaterally - otherwise the structure itself may be threatened.
By "incompetent", I mean not fit for a management and leadership position (one can still be good at something else, like analysis). For instance, I find myself to be incompetent when it comes to governance, management, or finance. There needs to be a mechanism to prevent people like me from getting into a management position (at least until I have much more experience).
So having good institutions and systems of governance is crucial. Poor behaviour is mostly avoided by having mechanisms that counter it - so we have a lot to learn here.
Holden Karnofsky has some interesting thoughts on governance:
https://forum.effectivealtruism.org/posts/c3y6khh7mxiWrDyeb/nonprofit-boards-are-weird
https://forum.effectivealtruism.org/posts/hxTFAetiiSL7dZmyb/ideal-governance-for-companies-countries-and-more
One theme is that good governance isn't exactly a solved problem. IMO EA should use a mix of approaches: copying best practices for high-stakes scenarios, and pioneering new practices for lower-stakes scenarios. (For example, setting up a small fund to be distributed according to some experimental new method, then observing the results. EDIT: Or setting up a tournament of some kind where each team is governed according to a randomly chosen method.) Advancing the state of the art doesn't just help us, it also seems like a promising cause area on its own.
Here are some forum tags that are potentially relevant:
https://forum.effectivealtruism.org/topics/nonprofit-governance
https://forum.effectivealtruism.org/topics/improving-institutional-decision-making
https://forum.effectivealtruism.org/topics/effective-institutions-project
As someone who's been around the scene for a while, happy to try providing some takes here:
1. In general, I agree this area is important, and improvement here could be beneficial. That said, it's tough to do well (as with most important things).
2. My take is that the EA/rationalist nonprofit scene, before ~2019 or so, didn't have the best senior operations/management talent. One primary reason is that funding was tight and precarious. Setting up a 501(c)(3) and managing it is a major pain. The EA/rationality scene attracted a bunch of young people with research-type skills. These people typically really dislike doing operations and don't have great networks of experienced manager types. Over time, the funding situation and the EA-adjacent talent pool have both improved a lot. (But of course, we still have bottlenecks here.)
3. I think it's easy to compare EA orgs with either top tech companies, top political organizations, or whatever one could imagine being really amazing and then be underwhelmed in comparison. I'd encourage critics to try to find more reasonable comparisons. EA orgs typically offer much lower salaries and are less prestigious than top tech companies. They're very different from political groups. I'm legitimately curious here. I'm not sure what examples are best for us to emulate. Many EA groups look up to 80,000 Hours and Open Philanthropy as examples of well-run nonprofits.
4. One tractable question could be - "what percentage of a nonprofit's resources do you think they should dedicate to governance improvements / self-evaluation"? Organizations typically don't exactly have the choice to be "really good or not", but they do have the option to just allocate more resources to this sort of thing, if donors approve. I guess few donors would want this to be greater than 20%, but I'm not sure.
5. Related to (3), many non-EA nonprofits seem pretty mediocre to me, likely in large part due to mediocre funding incentives and amounts. I've dug to find clever voices in this area and have been underwhelmed. I think the median organization in the US is just not that great when it comes to governance, and I could easily imagine nonprofits being worse than average businesses.
6. This might be a nitpick, but the 80k accounting issue, specifically, didn't actually seem terrible to me. Of all the mistakes they point out on that page, that doesn't seem like the biggest one. Plus, I can't think of other orgs with mistakes pages that are better than that. (Most orgs don't even have mistakes pages.)
7. I'm a board member of two EA nonprofits and run an EA nonprofit (QURI). I've dug for great resources or advisors for nonprofit management (maybe 2-4 solid weeks total, plus more for narrow skills), and have yet to be very impressed. Some EAs have been trying to invent their own patterns from scratch (i.e. Mistakes pages), due to this lack of being able to find great outside experience. There's a bunch of bad managerial consultants and materials out there - I'm sure there is some great stuff too, but at least I don't feel like I've been able to discern it yet.
8. I'd love it if we could find ways to be more honest about these situations. Like, "from 2014 to 2016, our organization was only able to get C+ talent to be in charge of our bookkeeping. Given our many constraints at the time, we thought this was a reasonable move." However, given cultural norms, it's unclear how we can say things like this without invading someone's privacy or similar. I think if you have conversations with organization leaders, they will generally be honest about their organization's limitations, but it's awkward to discuss publicly online.
I hope this doesn't sound too much like I'm giving excuses for these organizations. I'm trying to provide a better model for why these challenges exist.
I intend to spend a fair bit more time at this point investigating issues of governance. Doing so, of course, means that I'll spend less time doing object-level tasks in the near term.
If people reading this are interested in helping, more posts/work on this topic could be great.
I agree with this substantively, and agree that it's a bit unreasonable to think EA orgs should be similar to large corporations, but we should aspire to do this well - the status quo for small nonprofits is pretty dismal, and we should at the very least be better than peer organizations, and ideally better still. I also think that this is a great area to work on with Ian David Moss, who has a lot of experience in the area and is well connected with other experts.
Ozzie -- this was a good and constructive post.
One possible analogy might be for EA organizations to aspire to something like the structure of a well-run academic department, where there's typically a distinction between (1) professors who are mostly focused on research, and who are promoted into decision-making positions (e.g. area head, director of undergraduate education, committee chair, department chair) that they're not particularly well-trained for, and (2) staff who actually run all the operations, logistics, finances, grants, front offices, websites, etc., and who may not be paid as much as at tech companies, but who enjoy some of the prestige of working at a university.
As every experienced faculty member knows, without good department staff, everything falls apart.
That's a good point. I'm not very familiar with academic departments, particularly well-run ones. (I liked FHI, but the greater Oxford bureaucracy had issues).
By chance are there any examples of really good departments? Or writing on how to make a good academic department? I imagine this is a niche topic, but it seems like an important one.
Ozzie - I don't know of any good writings about what makes for a good, high-functioning academic department.
I'm speaking mostly from personal experience of having worked in a dozen academic departments in several countries over the last few decades. Generally speaking, there seems to be less variance in the quality of 'support staff' (e.g. front office, finance, etc) than in the quality of faculty leadership. Most staff seem 'pretty good' or better. Staff typically have their own hiring protocols, promotion processes, and job security norms, quite separate from tenure-track faculty. Staff often switch between departments, since running grant finance oversight for a psych department isn't that different from running it for a physics department (for example). Also, staff typically don't need to have much intellectual, emotional, or social investment in the 'cause area' or research topics that a department addresses; they often seem to feel rewarded simply by being paid well, having reasonable job security, being respected by faculty and students, getting along with other staff, and being associated with a generally prestigious organization.
So, I think it's useful for EA organizations to spend a fair amount of time and effort thinking about how to recruit and retain great staff. Maybe they have already; I don't know.
Really helpful contribution: it focuses on the key issues, is balanced and evidenced, and has concrete next steps. Thanks.
I broadly agree with the claims in this post in general, but am not convinced that the FTX collapse is evidence for them? I don't think FTX could reasonably be counted as an EA org - its senior people were EAs but most employees were not, it was aggressively optimizing for making money, it was a move fast + break things (too many things...) startup. I think that even if EA had much better norms of governance, I doubt this would have affected FTX's governance, though I agree that FTX's governance was clearly bad.
I'm open to the argument that the Future Fund was badly governed, but that feels non-obvious and hard to tell to me - it was maybe a screw-up that many EA orgs took Future Fund money in a way that left them vulnerable when it was abruptly cut off, though I think this outcome was genuinely hard to see coming. And I doubt they had much say in what was going on at FTX.
I am still curious what changes have been made or adopted since the FTX crisis regarding good governance within the EA community and EA-affiliated organizations. Are there any links/reports on the new changes/policies?
Thank you for adding this James. I think you clearly articulated some important points. Last week's events are going to give the EA community a lot to reflect on and learn from. I feel this is one of those growth moments that, because it is so unusually difficult and painful, gives an important opportunity to collectively re-examine just how valuable some historically under-appreciated organizational structures can be.
The Effective Altruism community is a young organization in all meanings of the phrase and, despite some challenges like this that can arise as a result, I still believe that it has extraordinary potential to learn, nimbly evolve, and grow in ways unparalleled by any other organization I know of. I hope we can continue to value mistakes as learning opportunities and use them as signposts where we need to direct our attention for future development.
I agree that good governance is important, but I'm bemused that your source for principles of good governance is the Council of Europe. Though it's not identical with the EU, I'm skeptical that an EU-affiliated institution is a good source to take lessons about good governance. Also, the list is about democratic governance, and hence not particularly relevant to businesses or nonprofits.
More generally, there's a significant tradeoff between doing stuff and having oversight. (See e.g. this Vitalik Buterin post on the bulldozer vs vetocracy political axis.) Many big institutions are very far on the oversight end of the spectrum, and are hence very slow to act in any situation. Conversely, startups are probably too far on the doing-stuff end of the spectrum, but for good reason.
That said, in a lifecycle of institutions, it makes sense for the surviving ones to become more bureaucratic and professionalized over time. Paul Graham:
And separately, it's easy to pay lip service to good governance, but hard to put it into practice. For instance, almost all Western democracies use highly suboptimal voting systems.
Plus in practice a lot of oversight is just CYA, which can incidentally be soul-crushing for the employees who have to implement it. (E.g. consider complaints by doctors about how much time they have to spend on administrative tasks rather than taking care of patients.)
Hi Tobias,
I believe the good governance mentioned in this post, and its context, is not about public or government institutions but about organizations that are geared toward delivering services and are privately managed.
All the best,
Miguel
I'm referring to the source referenced in the Need for Better Norms section.
Oh okay, missed that. Thank you.
Thank you for this post! One thing I wanted to point out: this post talks about governance failures by individual organizations, but EA orgs are unusually tightly coupled, so I suspect a lot more work needs to be done on governance at the ecosystem level.
I most recently worked for a government department. This single organisation was bigger, more complex, and less internally value-aligned than the ecosystem of EA orgs. EA has fuzzier boundaries, but for the most part it functions more cohesively than a single large organisation.
I haven't thought a tonne about how to do this in practice, but I read this report on "Constellation Collaboration" recently and found it compelling. I suspect there is a bunch more thinking that could be done at the ecosystem level.
Regarding your examples from the "Weak Norms of Governance" section:
1 - As I understand it, wasn't that a crime committed by an accountant and CFO (see the embezzlement charge here), i.e. by the kind of person you hire for oversight? How does that affect your conclusion?
4 - Surely the main lesson here is "don't update on, and maybe update against, criticism that sits on 75 karma despite 193 votes", especially when tons of comments have several times that karma.
I'm not familiar with the other situations.
(The comment above went from 6/5 karma to 0/-1 karma with no commentary.)
On 4, my impression of the controversy over this piece is just that it makes comparisons to both Leverage and standard start-up culture in a way that seemed inapt, and overly generous to Leverage, to many people. The on-the-ground facts about MIRI in it are mostly undisputed from what I can tell, and many of them are governance issues worth criticizing and relevant to a post like this.
They are absolutely not undisputed, and I don't understand why anyone would think that, given that the post has a crazy 956 comments. And again, there's a reason why the post is much, much lower-karma than tons of the comments.
In fact, I had the opposite impression: that the author saw Zoe's legitimate grievances about Leverage and drew spurious parallels to MIRI.
Which parts? I completely agree that the controversy is in large part over comparisons to Leverage, and that there is a great deal of controversy, but I'm not aware of a major factual point of the piece that is widely contested. Where the post gets specific, it concentrates on things like internal secrecy to avoid infohazards; MIRI thinking they are saving the world and are one of the only groups putting a serious effort towards it; and the serious mental health issues many people around these groups experienced - all things I think are just true and publicly available. I also take it that the piece was substantially playing down the badness of Leverage, at least implicitly, for instance by invoking similarities between both and the culture of normal start-ups and companies. Much of the controversy seems to be over this; some over the author's connections to Michael Vassar; and some over interpretations of facts that seem much less sinister to others (like the idea that if the author had been open about paranoid fantasies with MIRI employees, they might be disturbed and even try to report her for this, which others pointed out was pretty normal and probably healthier than the Leverage approach Zoe described). I'm not saying that none of the controversy was related to contested facts, or that everything in the piece is on the ball, just that you seem to be giving it too little credit as an account of governance/culture problems worth considering, based on what I see as a fairly superficial reading of karma/comment count.
IIRC commenters disputed whether / to which degree MIRI's secrecy & infohazards policy was in any way worse than typical NDAs for big companies.
IIRC re: Michael Vassar, the problem was not so much the connection to him, but that several people around Vassar (the author included) had experienced drug-induced psychosis, which made their criticisms and reported experiences suspect. My sense of the post was that it described innocuous facts and then considered them to be bad by analogy to e.g. Leverage.
Re: mental health, I agree that the MIRI world view is likely not good for one's mental health, but I wouldn't consider that a "ground fact" about MIRI, but rather (assuming one buys into that worldview) a problem with the world being the way it is. For instance, it sure would be better for everyone's mental health if AI alignment were universally agreed upon to be trivially easy, but unfortunately that's not the case.
I read a significant fraction of the comments in that thread when it first appeared (though not all of them). I'm stressing those data points so much because that thread is still getting cited to this day as if it's undisputed, legitimate and broadly community-endorsed criticism, merely because it has positive karma. Hence I think stressing how to interpret the karma score and number of comments is a very important point, not a superficial one.
To their credit, the EA and LW communities love to question and criticize themselves, and to upvote all criticism. Unfortunately, that lends credence to weak or epistemically dubious criticisms far beyond what would be merited.
I should emphasize that I agree with the point about mental health here, I more noted it as one of the major points of the post that was not really disputable. If MIRI is one of the only orgs making a truly decent effort to save the world, then that's just the way things are. Dealing with that fact in a way that promotes a healthy culture/environment, if it is true, is inherently very difficult, and I don't blame MIRI leadership for the degree to which they fail at it given that they do seem to try.
Thanks for writing this--I largely agree. One thing I have been thinking about a lot in the scandal is Harry Potter and the Methods of Rationality. For those who haven't heard of it, it is a very popular book in EA circles based on the premise of "what if Harry approached the magical world with a rational spirit of scientific inquiry." It's one of my favorite books of all time, and I am in no way blaming Eliezer Yudkowsky (the author) for any of this mess.
However, one of the messages that I at least take away from the book is "you are personally responsible for your failure to improve the world. This includes your failure on account of waiting for permission from authority or approval from society, or from feeling like the thing you are failing to do has too many evil-ish vibes even though it's actually good." I think the attitude that people should be personally ambitious to improve the world, and that they can do it themselves without approval from society (or a board of directors or an IRB or anyone), is maybe a little too strong in EA circles. I am also personally pretty sympathetic to this attitude--sometimes a board of directors and an IRB really are stopping you from making the world a better place, and sometimes you really should ask yourself whether you actually need them, from a practical and a moral perspective. But it just seems one should be aware of the virtues of occasionally thinking "yes, I could technically do something about this, but no, I am not personally responsible for unilaterally changing the circumstances; it really depends on the situation, and I should think long and hard about this and maybe get some other perspectives."
To be fair, HPMOR also has the message that "you are personally responsible for when you try to improve the world but then fuck everything up because you were so reckless" and also it's just a work of fiction and this is only my interpretation of it. Also you should read it if you haven't because it's really good.
What a useful discussion of a challenge to this and other aspirational communities. As I read the comments, many of them reflect the need for a virtue I highly value - humility. Humility leads to double checking what you do, not assuming it is always right. Humility creates the desire to confer and learn from others. Humility acknowledges that other perspectives may be helpful in achieving the most effective result. In an organization, it leads to the creation of checks and balances, even when they may not seem essential.
Humility is not a road block to progress. To me it is an essential part of achieving broad and effective results. When I was young, I often “knew best”. Sometimes that let me forge ahead and succeed, not being slowed by considerations beyond my vision. And that success often multiplied my sense that I was being effective, further limiting my ability to listen to differing voices. As I look back, I see how things could have been done more powerfully if I had exercised a little more humility.
I think a practical intervention here would be outlining how much governance should be in place at a variety of different scales. "We employ 200 people directly and direct hundreds of millions of dollars annually" should obviously come with much more governance structure than two people self-funding a project. For example: "by the time your group has ten members and expects to grow, one of them, who is not in a leadership role themselves, should be a designated contact person for concerns, and a second, replacement contact, as socially and professionally distant from the first as practical, should be designated by the time your group hits 30 people." I expect explicit growth models of governance to be much more useful than broad prescriptions for decision-makers, and to make explicit the actual disagreements that people have.
I can't agree with this post more. I raised concerns about the number of times I heard people placed into leadership positions at new EA spin-offs/projects relay that they felt they didn't have sufficient oversight/reporting mechanisms in place to get the best out of these programmes, or even out of their own talents. Idealistic as we all are, big bucks need big governance - it's as simple as that.
So let's get scientific about all this. It is now absolutely essential that EA makes the a priori publication of impact metrics, programmatic design, and oversight mechanisms business as usual for absolutely all affiliated activities. It is too easy to get creative with this accounting if it is done retrospectively - reach gets inflated and declared financial waste gets minimised or swept under rugs. If we want the public to ever trust us with their funds again, we have to show all our workings and be open about our failings. The only way of cleaning up the reputational damage of our FTX affiliation is a concerted effort around external and objective oversight/audit (annual charity commission reporting is not sufficient) and layered internal governance via boards and committees and, yes, I'm afraid, a metric tonne of bureaucratic paperwork.
EA now has brand-value and must act accordingly: it cannot shirk responsibility as 'an ideology' or even as an accelerator perhaps can. Any negative association is now our liability and our responsibility. A single bad apple ruins the brand equity barrel - not least when the public bloodlust for all things EA is considered. Do we really want to be the makings of just another bad Netflix documentary?
I have worked in and alongside global programs via corporate philanthropy, international relations, and political parties. I have helped to manage the oversight of programmes ranging from £50k to £50mil in valuation and have seen how paperwork and wraparound scrutiny mechanisms are supposed to increase exponentially with each added zero. This is the only way of safeguarding beneficiaries and of behaving ethically: failing to step up to this bureaucratic burden risks the poor programmatic design or fiscal mismanagement that does unthinkable harm to those who come to depend upon the hope and means of social mobility that these interventions purport to provide.
I would be delighted to leverage this experience and help EA in its current crisis, and - it must be stated - I was glad to see that when I did raise these initial concerns with EA leadership, they were responded to positively and I was immediately invited to meet with the governance team. That said, my concerns still stand, so the offer does too :)
Let's get our house in order folks.
Hi Fods,
This is a great post, with historical accounts of EA's previous issues which show a trend that needs reviewing. Thank you for sharing this.
Let me add that regular reviews of decision-making policies and accounting controls, plus regular declarations of conflicts of interest, are deemed best practice in preventing large-scale fraud or errors. When large sums of inflows and outflows of funds are involved, any organization should strongly advocate systems that can help prevent future mishaps, rather than seeking to correct them after such unfortunate events happen.
All the best,
Miguel
Related advocacy of better governance shortly before FTX collapsed: https://forum.effectivealtruism.org/posts/7urvvbJgPyrJoGXq4/fallibilism-bias-and-the-rule-of-law
Among other things, the article advocates the rule of law and criticizes a MIRI worker's claim that EA is a good and rational community, that does a better job than others, because "EAs have similar goals and so EA doesn't waste tons of energy on corruption, preventing corruption, negotiation, etc."