
Introduction

This is a draft that I largely wrote back in Feb 2023, about how the future of EA should look after the implosion of FTX.

I. What scandals have happened and why?

1. 

There are several reasons that EA might be expected to do harm and have scandals:

a) Bad actors. Some EAs will take harmful actions with a callous disregard for others. (Some EAs have psychopathic tendencies, and it is worth noting that utilitarian intuitions correlate with psychopathy.)

b) Naivete. Many EAs are not socially competent, or street-smart, enough to detect the bad actors. Rates of autism are high in EA, but there are also more general hypotheses. Social movements may in general be susceptible to misbehaviour, due to young members having an elevated sense of importance, or being generally impressionable. See David Chapman on “geeks, mops and sociopaths” for other hypotheses.

c) Ideological aspects. Some ideas held by many EAs - whether right or wrong, and implied by EA philosophy or not - encourage risky behaviour. We could call these ideas risky beneficentrism (RB), and they include:

i. High risk appetite.

ii. Scope sensitivity

iii. Unilateralism

iv. Permission to violate societal norms. Violating or reshaping an inherited morality or other “received wisdom” for the greater good.

v. Other naive consequentialism. Disregard of other second-order effects

There are also hypotheses that mix or augment these categories: EAs might be more vulnerable to psychopathic behaviour because that kind of decision-making appears superficially similar to consequentialist decision-making.

2. 

All of (a-c) featured in the FTX saga. SBF was psychopathic, and his behaviour included all five of these dimensions of risky beneficentrism. The FTX founders weren’t correctly following the values of the EA community, but much of what would have been warning signs to others (gambling-adjacency, the Bahamas, lax governance) just looked to us like someone pursuing a risky, scope-sensitive, convention-breaking, altruistic endeavour. And we EAs outside FTX, perhaps due to ambition and naivete, supported these activities.

3. 

Other EA scandals, similarly, often involve multiple of these elements:

  • [Person #1]: past sexual harassment issues, later reputation management including Wiki warring and misleading histories. (norm-violation, naive conseq.)
  • [Person #2]: sexual harassment (norm-violation? naive conseq?)
  • [Person #3] [Person #4] [Person #5]: three more instances of crypto crimes (scope sensitivity? norm-violation, naive conseq.? naivete?)
  • Intentional Insights: aggressive PR campaigns (norm-violation, naive conseq., naivete?)
  • Leverage Research, including partial takeover of CEA (risk appetite, norm-violation, naive conseq, unilateralism, naivete)
  • (We’ve seen major examples of sexual misbehaviour and crypto crimes in the rationalist community too.)

EA documents have tried to discourage RB, but this now seems harder than we thought. Maybe promoting EA inevitably leads to significant amounts of harmful RB. 

4.

People have a variety of reasons to be less excited about growing EA:

EA contributed to a vast financial fraud, through its:

  • People. SBF was the best-known EA, and one of the earliest 1% of EAs. FTX’s leadership was mostly EAs. FTXFF was overwhelmingly run by EAs, including EA’s main leader and another intellectual leader of EA.
  • Resources. FTX had some EA staff and was funded by EA investors.
  • PR. SBF’s EA-oriented philosophy on giving and his purported frugality served as cover for his unethical nature.
  • Ideology. SBF apparently had an RB ideology, as a risk-neutral act-utilitarian who, a decade ago on Felicifia, argued that stealing was not in principle wrong. In my view, his ideology, at least as he professed it, could best be understood as an extremist variant of EA.

Now, alongside the positive consequences of EA, such as billions of dollars donated, and minds changed about AI and other important topics, we must account for about $10B having been stolen.

People who work in policy now stand to have their reputations harmed by a connection to “EA”. This means that EA movement growth can make policymakers more likely to be tied to EA, harming their career prospects.

From a more personal perspective, 200-800 EAs lost big on FTX: their jobs or savings, or they suffered sheer embarrassment (having grants they had recommended rescinded, etc.).

Promoting EA may also be more difficult, as there are new ways that it can go badly. For example, the conversation could go like this:

Booster: “have you heard about EA?”
Recruit: “that thing that donated $1B and then FTX defrauded $10B? The fifth largest fraud of all time?”
Booster: “...”

II. More background considerations about EA’s current trajectory.

5.

Despite the above, EA is not about to disappear entirely. For any reasonable strategy that EA leaders pursue, some of the most valuable aspects of EA will persist:

  • EA values. Some people care about EA very deeply. Religions can withstand persecution by totalitarian governments, and some feel just about as strongly about EA. So people will continue to hold EA values.
  • EA orgs. There are >30 orgs doing different EA things, with different leaders, inside and outside academia, in many nations, and supported by multiple funders. Most will survive.
  • The EA network. Many have been friends and colleagues for a decade or more. This adversity won’t tear the social graph apart.

6.

There has been a wide range of responses to FTX-related events. One common response is that people are less excited about EA.

  • Good people have become disillusioned or burned out, and are leaving EA. Note also the discontinuation of Lightcone for partly related reasons.
  • Personally, I still value EA goals, EA work, and have EA friends, but my enthusiasm for EA as a movement is gone.
  • Some argue that the people I hear from are not representative. But I think they differ, if anything, by being older, more independent-minded, and more academically successful. And if outreach is still thriving, but only because new recruits are young and naive about EA’s failings, then this is not reassuring. Besides, the bigger selection effect is that we are not hearing from many people at all, because they have left EA.

7.

Another kind of response is for people to get defensive. Some of us have adopted a bunker mindset, trying to fortify EA against criticism. One minor example: when I wrote in my bio, for an EA organiser retreat, that I am interested in comparing community-building vs non-CB roles, and in discussing activities other than movement-building, a worried organiser reminded me that the purpose of the retreat is to promote community-building. Another example is that many told me that their biggest concern about criticism of EA is that it demotivates people. Others reflected more deeply on EA, and I think that was the more appropriate response.

8.

Some people changed their views on EA less than I would like, especially some grantmakers and leaders. (I hesitate to point it out, but such folks do derive a lot of influence from their roles within EA, making it harder to update.)

  • Some have suggested that there was little wrong with the style of politics pursued using FTX money, apart from the fraud.
  • Some asked: if students seem to still be interested, then why is there a problem? One asked, “aren’t you a utilitarian? Should you therefore place low weight on these things?”. I, however, think that our seat-of-the-pants impression of the severity of events is more reliable than e.g. surveys of selected samples of event attendees, and that we should treat recent scandals seriously.

9.

I think that CEA has struggled in some ways to adapt to the new situation.

  • Vacuous annual update. CEA’s annual updates said that CEA had a great year, because metrics were up, while EA at large had its worst year ever. In order to know the value of different kinds of growth, including even its sign, one must know the strategic landscape, which was not discussed. EVF and its board have generally said little about the FTX issue.
  • Governance. EVF failed to announce its new CEO for months, and for almost a year, the FTXFF team still made up 40% of its UK board. At the time of writing, it is thought to be about 10% likely to be subject to discipline from the Charity Commission.

III. Thoughts on strategy

So far, I have largely just described EA’s strategic position. I will now outline what I think we should now do.

10.

Promoting “EA” in order to promote interest in x-risk now makes less sense. But I don’t think pivoting the longtermist EA scene toward being an “x-risk” community solves the problem; it might even make it worse.

a) Typically, people I talk to are interested in EA movement-building as a route to reducing x-risks. It is a priori surprising that we should need to detour through moral philosophy and global health, but they raise three points in favour:

i. EA is an especially persuasive way to get people working on x-risk

ii. (AI) x-risk is too weird to directly persuade people about.

iii. People who work on existential risk for “the right reasons” (i.e. reasons related to EA) can be trusted more to do a good job.

Due to the reputational collapse of EA, (i) is less plausible. (ii) is much less true than it once was, thanks to Superintelligence, The Precipice, and growing AI capabilities. It might make some sense to want someone who can pursue your goals, as in (iii). But on the other hand, interest in EA is now a less reliable signal of someone’s epistemics, and their trustworthiness in general, pushing in the other direction.

b) There is also somewhat less goodwill and trust between longtermist and non-longtermist EA than there was a decade ago.

c) Even before the events of FTX, we knew that there was low-hanging fruit in field-building that we should pick - now we just need to give it more of our attention. 

d) So, within a few years, I think most people doing EA work will be inclined to promote causes of interest - x-risk, global poverty, etc. - in a direct fashion.

e) This is not without its challenges. We will need to work out new cause-specific talent pipelines. And many of the difficulties with EA outreach will remain in the cause-specific communities. 

f) A central problem is that an AI safety community founded by EAs could - absent mitigating actions - be expected to have many of the same problems that afflicted EA. We could still expect that a subculture might form. If AI risk can literally cause human extinction, then it is not hard to justify the same kind of risky actions that RB can justify. Some illegal actions might even be more likely in an AI safety community, such as trying to harm AI researchers. So fundamentally, a switch to directly promoting AI/x-risk reduction doesn’t solve the problems discussed in this document.

g) But starting a newly branded “AI safety” community might present an opportunity to reset our culture and practices.

11.

Apart from pivoting to “x-risk”, what else could we do? Would it help if…

a) …EA changed its name? 

This would be deceptive, and wouldn’t solve the risky behaviour, or even the reputational issues, in the long term.

b) …we had a stronger community health team with a broad mandate for managing risks, rather than mostly social disputes and PR? 

Maybe, but CH already had a broad mandate on paper. Given EVF’s current situation, it might be a tall task. And if VCs and accountancy firms didn’t see FTX’s problems, then a beefed-up CH might not either.

Maybe a CH team could do this better independently of CEA.

Alternatively, risk management could be decentralised by instantiating a stronger norm against risky projects: If Alice thinks some project is good but Bob says it’s harmful, we trust Bob more than we did before.

c) ... we shape the ideology to steer clearer of RB & naive consequentialism?

Yes, this was already attempted, but we could do more. OTOH, it could be that beneficentrists naturally (and intractably) include some psychopaths and other untrustworthy figures. If so, this could undermine the central strategy of recruiting beneficentrists to a community. But to the extent that people are committed to growing this beneficentrist community, such mitigation strategies are worth trying.

d) … we selected against people with dark triad traits, and in cases where we did recruit such people, we identified and distanced ourselves from them?

Absolutely.

e) … EA was more professionalised, and less community-like?

Yes. People do risky things when they feel it’s them vs the world. And we trusted Sam too much because he was “one of us”. 

f) … we had better governance?

Yes, there was little oversight of FTX, which enabled various problems.

g) … EAs otherwise became less bound to one another's reputations?

It doesn’t obviate the harm from risky behaviour, but ideally yes, and this could also be achieved by being less community-like.

We could focus less on visible displays of do-gooding, to reduce the false impression that we are supposed to be a community of moral saints.

These are just my initial thoughts; obviously, on all fronts, further investigation is needed.

12. 

What should happen overall?

a) I think most EAs are not yet ready to let go of the existence of an EA movement. Nor is it clear that we could wind down effectively if we tried (without it getting co-opted). And things might look different in a year. So I don’t think it makes sense to try to wind down EA completely.

b) Still, it’s hard to see how tweaking EA can lead to a product that we should be excited about growing. Especially considering that we have the excellent option of just talking directly about the issues that matter to us, and doing field-building around those ideas - AI safety, Global Priorities Research, and so on. This would be a relatively clean slate, allowing us to do more (as outlined in 11), to discourage RB, and stop bad actors.

c) In this picture, EA would grow more slowly or shrink for a while, and maybe ultimately be overtaken by cause-specific communities.

Thanks to a dozen or so readers for their thoughtful comments and suggestions.

Comments

Still, it’s hard to see how tweaking EA can lead to a product that we and others be excited about growing. Especially considering that we have the excellent option of just talking directly about the issues that matter to us, and doing field-building around those ideas... This would be a relatively clean slate, allowing us to do more (as outlined in 11), to discourage RB, and stop bad actors.

Do you remember how animal rights was pre-EA? At the first Animal Rights National Conference I went to, Ingrid Newkirk dedicated her keynote address to criticizing scope sensitivity, and arguing that animal rights activists should not focus on tactics which help more animals. And my understanding is that EA deserves a lot of the credit for removing and preventing bad actors in the animal rights space (e.g. by making funding conditional on organizations following certain HR practices).

It's useful to identify ways to improve EA, but we have to be honest that imaginary alternatives largely seem better because they are imaginary, and actual realistic alternatives also have lots of flaws.

(Of course, it's possible that those flawed alternatives are still better than EA, but figuring this out requires actually comparing EA to those alternatives. Some people have started to do this e.g. here, and I find that work valuable.)

And my understanding is that EA deserves a lot of the credit for removing and preventing bad actors in the animal rights space (e.g. by making funding conditional on organizations following certain HR practices).

Do you know of anywhere this is more documented or discussed? It seems a pretty relevant case to the concerns people have about EA itself being under-HR'd.

ACE's "organizational health" criterion is described here and they wrote a blog post about it here. tl;dr is that they have a checklist of various policies and also survey staff, then combine this into a rating on dimensions like "Harassment and discrimination policies".

As an example of it in action, see the 2022 review of Vegetarianos Hoy:

A few staff (1–3 individuals) report that they have experienced harassment or discrimination at their workplace during the last 12 months, and a few (1–3 individuals) report to have witnessed harassment or discrimination of others in that period. In particular, they report low recognition of others’ work and low salaries. All of the claimants reported that the situation was not handled appropriately...

Vegetarianos Hoy’s leadership team recognizes reported issues and reports that they have taken steps to resolve them. In particular, they report they are aware of alleged issues and have hired a Culture and Talent Analyst position and two new leadership positions.

I think OP also deserves a lot of the credit, but I am not aware of anything publicly written to describe what they have done.

Still, it’s hard to see how tweaking EA can lead to a product that we and others be excited about growing. 

 

It's not clear to me how far this is the case. 

  • Re. the EA community: evidence from our community survey, run with CEA, suggests a relatively limited reduction in morale post-FTX. 
  • Re. non-EA audiences, our work reported here and here (though still unpublished due to lack of capacity) suggests relatively low negative effects in the broader population (including among elite US students specifically).

I agree that:

  • Selection bias (from EAs with more negative reactions dropping out) could mean that the true effects are more negative. 
    • I agree that if we knew large numbers of people were leaving EA this would be another useful datapoint, though I've not seen much evidence of this myself. Formally surveying the community to ask how many people respondents know who have left could be useful for adjudicating this.
    • We could also conduct a 'non-EA Survey' which tries to reach people who have dropped out of EA, or who would be in EA's target audience but who declined to join EA (most likely via referrals), which would be more systematic than anecdotal evidence. RP discussed doing this with researchers/community builders at another org, but hasn't run it due to lack of capacity/lack of funding.
  • If many engaged EAs are dropping out but growth is continuing only because "new recruits are young and naive about EA’s failings," this is bad. 
    • That said, I see little reason to think this is the case.
    • In addition, EA's recent growth rates seem higher than I would expect if we were seeing considerable dropout. 

Especially considering that we have the excellent option of just talking directly about the issues that matter to us, and doing field-building around those ideas - AI safety, Global Priorities Research, and so on. This would be a relatively clean slate, allowing us to do more (as outlined in 11), to discourage RB, and stop bad actors.

It's pretty unclear to me that we would expect these alternatives to do better. 

One major factor is that it's not clear that these particular ideas/fields are in a reputationally better position than EA. Longtermist work may have been equally or more burned by FTX than EA. AI safety and other existential risk work have their own reputational vulnerabilities. And newer ideas/fields like 'Global Priorities Research' could suffer from being seen as essentially a rebrand of EA, especially if they share many of the same key figures/funding sources/topics of concern, which (per your 11a) risks being seen as deceptive. Empirical work to assess these questions seems quite tractable and neglected.

Re. your 10f-g, I'm less sanguine that the effects of a 'reset' of our culture/practices would be net positive. It seems like it may be harder to maintain a good culture across multiple fragmented fields in general. Moreover, as suggested by Arden's point number 1 here, there are some reasons to think that basing work solely around a specific cause may engender a less good culture than EA, given EA's overt focus on promoting certain virtues.

Thanks for this. I found the uncited claims about EA's "reputational collapse" in the OP quite frustrating and appreciated this more data-driven response.

I agree. The most striking part of this article was that this core assumption had no numerical data to back it up, only his own discussions with high-level EAs.

"Due to the reputational collapse of EA"

High-level EAs are more likely to have had closer involvement with SBF/FTX and are therefore more likely to have higher levels of reputational loss than the average EA, or even the movement as a whole. I would confidently guess that the "200-800" EAs who lost big on FTX would skew heavily towards the top of the leadership structure.

The three studies cited here in the comments, and a few from community organisers, provide evidence that yes, EA has suffered reputationally, but hardly collapsed. Why not at least mention those? Maybe because it was a February draft, but I would have thought revising to cite what data is available would be a good idea?

The OP might be right that the situation is worse than it appears in the research we have, but I would have thought that making arguments against the validity of that research would have been a good idea.

FYI, I’ve just released a post which offers significantly more empirical data on how FTX has impacted EA. FTX’s collapse seems to mark a clear and sizable deterioration across a variety of different EA metrics.

Leverage Research, including partial takeover of CEA

I am very shocked. What exactly happened? How could this happen? How could the CEA possibly let itself be infiltrated by a cult striving to take over the world? And how could an organization founded by academics fail to scrutinize Leverage's pseudo-scientific and manipulative use of concepts and techniques related to psychotherapy and rationality? Did CEA ever consult an independent psychological scientist or psychotherapy researcher to assess the ethicality of what Leverage was doing, the accuracy of their claims, or the quality of their "research"? Didn't it raise any red flags that the people inventing new methods of "psychotherapy" had no training in clinical psychology?

You can read more about CEA's relationship with Leverage Research on our mistakes page.

I'm not the best person to speak on this, but note that the people listed second and third on Leverage's teams page used to be CEA's CEO and Head of Events, Groups, and Grants.

I am one of the people who thinks that we have reacted too much to the FTX situation. I think as a community we sometimes suffer from a surfeit of agency and we should consider the degree to which we are also victims of SBF's fraud. We got used. It's like someone asking to borrow your car and then using it as the getaway car for a heist. Sure, you causally contributed, but your main sin was poor character judgement. And many, many people, even very sophisticated people, get taken in by charismatic financial con men.

I also think there's too much uncritical acceptance of what SBF said about his thoughts and motivations. That makes it look like FTX wouldn't have existed and the fraud wouldn't have happened without EA ideas... but that's what Sam wants you to think.

I think if SBF had never encountered EA, he would still have been working in finance and spotted the opportunity to cash in with a crypto exchange. And he would still have been put in situations where he could commit fraud to keep his enterprise alive. And he would still have done it. Perhaps RB provided cover for what he was thinking (subconsciously even), but plenty of financial fraud happens with much flimsier cover.

That is, I think P(FTX exists) is not much lower than P(FTX exists | SBF in EA), and similarly for P(FTX is successful) and P(SBF commits major fraud) (actually I'd find it plausible to think that the EA involvement might even have lowered the chances of fraud).

Overall this makes me mostly feel that we should be embarrassed about the situation and not really blameworthy. I think the following people could do with some reflection:

  • People who met, liked, and promoted SBF. "How could I have spotted that he was a con man?"
  • People who bought SBF's RB reasoning at face value. "How could I have noticed that this was a cover for bad behaviour?"
    • I include myself in this a bit. I remember reading the Tyler Cowen interview where he says he'd play double or nothing with the world on a coin flip and thinking "haha, how cute, he enjoys biting some bullets in decision theory, I'm sure this mostly just applies to his investing strategy". 

That's about it. I think the people who took money from the FTXFF are pretty blameless and it's weird to expect them to have done massive due diligence that others didn't manage.

There are various reasons to believe that SBF's presence in EA increased the chance that FTX would happen and thrive:

  • Only ~10k/10B people are in EA, while they represent ~1/10 of history's worst frauds, giving a risk ratio of about 10^5:1, or 10^7:1, if you focus on an early cohort of EAs. This should give an immediate suspicion that P(FTX thrives | SBF in EA)/P(FTX thrives | SBF not in EA) is very large indeed. (A sketch of this arithmetic follows after this list.)
  • Sam decided to do ETG due to conversations with EA leaders. 
  • EA gave Alameda a large majority of its funding and talent.
  • EA gave FTX at least 1-2 of the other leaders of the company.
  • ETG was a big part of Sam's public image and source of his reputation.
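
To make the arithmetic in the first bullet explicit, here is a minimal sketch (my reconstruction; the rounded population and fraud-share figures are the ones given above, and the size of the early cohort is not specified):

```latex
% Sketch: over-representation of EAs among the perpetrators of history's worst frauds.
% Assumed inputs (rounded, from the bullet above):
%   P(EA)               ~ 10^4 / 10^10 = 10^-6   (share of the world population in EA)
%   P(EA | worst fraud) ~ 1/10                   (share of the worst frauds attributable to EA)
% By Bayes' rule, the implied risk ratio is roughly
\[
  \frac{P(\text{worst fraud} \mid \text{EA})}{P(\text{worst fraud} \mid \text{not EA})}
  \;\approx\; \frac{P(\text{EA} \mid \text{worst fraud})}{P(\text{EA})}
  \;=\; \frac{10^{-1}}{10^{-6}} \;=\; 10^{5}.
\]
% Restricting attention to an early EA cohort (orders of magnitude fewer than 10^4
% people) shrinks the denominator and pushes the ratio toward the quoted 10^7.
```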

Only ~10k/10B people are in EA, while they represent ~1/10 of history's worst frauds, giving a risk ratio of about 10^5:1, or 10^7:1, if you focus on an early cohort of EAs.

This seems wildly off to me - I think the strength of the conclusion here should make you doubt the reasoning!

I think that the scale of the fraud seems like a random variable uncorrelated with our behaviour as a community. It seems to me like the relevant outcome is "producing someone able and willing to run a company-level fraud"; given that, whether it's a big one or a small one seems like it just adds (an enormous amount of) noise.

How many people-able-and-willing-to-run-a-company-level-fraud does the world produce? I'm not sure, but I would say it has to be at least a dozen per year in finance alone, and more in crypto. So far EA has got 1. Is that above the base rate? Hard to say, especially if you control for the community's demographics (socioeconomic class, education, etc.).

I estimated that 1-2% of YCombinator-backed companies commit substantial fraud. It seems hard to make the case that the rate in EA is 10^7x this.

This is a very interesting take, and very well expressed. You could well be right that the narrative that 'we got used' is the most correct simple summary for EAs/EA. And I definitely agree that it is an under-rated narrative. There could even be psychological reasons for that (EAs being more prone to guilt than to embarrassment?).

I note that even if P(FTX exists | EA exists) were quite a bit higher than P(FTX exists | ~EA exists), that could be compatible with your suggested narrative of EAs being primarily marks/victims. To reuse your example, if you were the only person the perpetrator of the heist could con into lending their car to act as a getaway vehicle, then that would make P(Heist happens | Your actions) quite a bit higher than P(Heist happens | You acting differently), but you would still be primarily a mark or (minor) victim of the crime, rather than primarily one of the responsible parties for it.

I agree the primary role of EAs here was as victims, and that presumably only a couple of EAs intentionally conspired with Sam. But I wouldn't write it off as just social naivete; I think there was also some negligence in how we boosted him, e.g.:

  • Some EAs knew about his relationship with Caroline, which would undermine the public story about FTX<->Alameda relations, but didn't disclose this.
  • Some EAs knew that Sam and FTX weren't behaving frugally, which would undermine his public image, but also didn't disclose.
  • Despite warnings from early-Alameda people, FTX received financial and other support from EAs.
  • EAs granted money from FTX's foundation before it had been firewalled in a foundation bank account.
  • EA leaders invited him to important meetings, IIRC, even once he was under govt investigation.

It might be that naive consequentialist thinking, a strand of EA's cultural DNA, played a role here, too. In general I think it would be fruitful to think about ways that our ambitions, attitudes, or practices might have made us negligent, not just ways that we might have been too trusting.

  • Some EAs knew about his relationship with Caroline, which would undermine the public story about FTX<->Alameda relations, but didn't disclose this.
  • Some EAs knew that Sam and FTX weren't behaving frugally, which would undermine his public image, but also didn't disclose.

FWIW, these examples feel hindsight-bias-y to me. They have the flavour of "we now know this information was significant, so of course at the time people should have known this and done something about it". If I put myself in the shoes of the "some EAs" in these examples, it's not clear to me that I would have acted differently and it's not clear what norm would suggest different action.

Suppose you are a random EA. Maybe you run an EA org. You have met Sam a few times, he seems fine. You hear that he is dating Caroline. You go "oh, that's disappointing, probably bad for the organization, but I guess we'll have to see what happens" and get on with your life.

It seems to me that you're suggesting this was negligent, but I'm not sure what norm we would like to enforce here. Always publish (on the forum?) negative information about people you are at all associated with, even if it seems like it might not matter? 

The case doesn't seem much stronger to me even if you're, say, on the FTX Foundation board. You hear something that sounds potentially bad, maybe you investigate a little, but it seems that you want a norm that there should be some kind of big public disclosure, and I'm not sure that really is something we could endorse in general.

I think the best thing here would have been for much of the information to be shared casually, without needing to justify itself as important. People gossip about relationships and their terrible bosses all the time. I suspect that if that had happened, people would have gathered more clues earlier, enough to make a difference on the margin.

I think it's worth emphasizing that if "naive consequentialism" just means sometimes thinking the ends justify the means in a particular case, and being wrong about it, then that extends into the history of scandals far, far beyond groups that have ever been motivated by an explicitly utilitarian technical moral theory.

Oh, I definitely agree that the guilt narrative has some truth to it too, and that the final position must be some mix of the two, with somewhere between a 10/90 and 90/10 split. But I'd definitely been neglecting the 'we got used' narrative, and had assumed others were too (though aprilsun's comment suggests I might be incorrect about that).

I'd add that for different questions related to the future of EA, the different narratives change their mix. For example, the 'we got used' narrative is at its most relevant if asking about 'all EAs except Sam'. But if asking about whether it is good to grow EA, it is relevant that we may get more Sams. And if asking 'how much good or bad do people who associate with EA do?' the 'guilt' narrative increases in importance. 

I do think it's an interesting question whether EA is prone to generate Sams at higher than the base rate. I think it's pretty hard to tell from a single case, though.

As you've linked me to this comment elsewhere, I'll respond.

  • I'm not sure why EAs who knew about Caroline and Sam dating would have felt the need to 'disclose' this? (Unless they already suspected the fraud but I don't think I've seen anyone claim that anyone did?)
  • Sam and FTX seem about as frugal as I'd expect for an insanely profitable business and an intention to give away the vast majority of tens of billions of dollars, too frugal if anything (although I actually think my disagreeing with you here is a point in favour of your broader argument -- the more Sam splurged on luxuries, the less it looks like he was motivated by any kind of altruism)
  • What financial and other support did FTX receive after the early Alameda split by people who were 'warned'? (There may well be a substantial amount, I just don't think I've heard of any.) I'll note, though, that it's easy to see as 'warnings' now what at the time could have just looked like a messy divorce.
  • Admittedly I don't know much about how these things work, but yet again this looks like hindsight bias to me. Would this have been a priority for you if you had no idea what was coming? If the firewalling was delayed for whatever reason, would you have refused to award grants until it had been? Perhaps. But I don't think it's obvious. Also wouldn't they still be vulnerable to clawbacks in any case?
  • Again, I just don't think I know anything about this. In fact I don't think I knew Sam was under government investigation before November. I'd be curious if you had details to share, especially regarding how serious the allegations were at the time and which meetings he was invited to.

1 - it's because Sam was publicly claiming that the trading platform and the traders were completely firewalled from one another, and had no special info, as would normally (e.g. in the US) be legally required to make trading fair, but which is impossible if the CEOs are dating

2 - I'm not objecting to the spending. It was clear at the time that he was promoting an image of frugality that wasn't accurate. One example here, but there are many more.

3 - A lot of different Alameda people warned some people at the time of the split. For a variety of reasons, I believe that those who were more involved would have been warned commensurately more than I was (someone who was barely involved).

4 - Perhaps, but it is negligent not to follow this rule, when you're donating money from an offshore crypto exchange.

5 - I'm just going off recollection, but I believe he was under serious-sounding US govt investigation.

1 - Oh I see. So who knew that Sam and Caroline continued to date while claiming that FTX and Alameda were completely separate?

2 - You link to a video of a non-EA saying that Sam drives a Corolla, which also has a shot of his very expensive-looking apartment... what about this is misleading or inaccurate? What did you expect the EAs you have in mind to 'disclose' - that FTX itself wasn't frugal? Was anyone claiming it was? Would anyone expect it to have been? Could you share some of your many actual examples?

3 - (I don't think you've addressed anything I said in this section, so perhaps we should just leave it there.)

4 - Perhaps, but as I indicated, it looks like it wouldn't have made much difference anyway.

5 - (Okay, it looks like we should just leave this one there too.)

I think this is a very good summary.

To reuse your example, if you were the only person the perpetrator of the heist could con into lending their car to act as a getaway vehicle, then that would make P(Heist happens | Your actions) quite a bit higher than P(Heist happens | You acting differently), but you would still be primarily a mark or (minor) victim of the crime

Yes, this is a good point. I notice that I don't in fact feel very moved by arguments that P(FTX exists | EA exists) is higher, I think for this reason. So perhaps I shouldn't have brought that argument up, since I don't think it's the crux (although I do think it's true, it's just over-determining the conclusion).

Anecdotally, among the EAs I've spoken to IRL about all this and among the non-EAs I've spoken to about it, 'EA got used' is by far the more common narrative. I think the mood on this forum is quite different and I thought of the OP as the EA most committed to this 'it's all EA's fault' narrative even before he made this post. So I worry that his post paints a very skewed picture (also obviously because of the research others have mentioned in the comments).

See here. Among people who know EA as well as I do, many - perhaps 25% - are about as pessimistic as me, and some of the remainder have conflicts of interest, or have left.

Interesting. I think I know EA as well as you do and know many EAs who know EA as well as you do and as I said, you're the person most committed to this narrative that I could think of even before you made this post (no doubt there are others more pessimistic I haven't come across, but I'd be very surprised if it was 25%+). I also can't think of any who have left, but perhaps our respective circles are relevantly skewed in some way (and I'm very curious about who has!).

Point taken, though, that many of us have 'conflicts of interest,' if by that you mean 'it would be better for us if EA were completely innocent and thriving' (although as others have pointed out, we are an unusually guilt-prone community).

I mostly agree that people seem to have overreacted and castigated themselves about SBF-FTX, but I also feel the right amount of reaction should be non-trivial. We aren't just talking about SBF, as the whole affair included other insiders who were arguably as much 'true believers' in EA as it is reasonable to expect (like Caroline Ellison), and SBF-FTX becoming poster-children of the movement at a very high level. But I think you are mostly right: one can't expect omniscience and a level of character-detection amongst EAs when among the fooled were much more cynical, savvy and skeptical professionals in finance.

For what it's worth, I feel some EA values might have fueled some of Sam's bad praxis, but they weren't the first mover. From what I've read, he absorbed (naive?) utilitarianism and a high-risk mindset from home. As for the counterfactual of him ending up where he has without any involvement with EA... I just don't know. The story that is usually told is that his intent was to work in charity NGOs before Will MacAskill steered him towards an 'earning to give' path. Perhaps he would have gone into finance anyway after some time. It's very difficult to gauge intentions and mental states - I have never been a fan of Sam's (I discovered his existence, along with that of EA, after and because of the FTX affair), but I can still assume that, if it comes to 'intent', his thoughts were probably more in a naive utilitarian, 'rules are for the sheep, I am smart enough to take dangerous bets and do some amoral stuff towards creating the greater good' frame than 'let me get rich by a massive scam and fleece the suckers'. Power and vanity would probably reinforce these as well.

We aren't just talking about SBF, as the whole affair included other insiders who were arguably as 'true believers' in EA as it is reasonable to expect (like Caroline Ellison)

Are there actually any other examples beyond SBF and his on-and-off romantic partner? I'd be a lot more concerned if there were several EAs independently plotting fraud who then joined forces, but that doesn't seem to be the case.


Nishad Singh pleaded guilty to several counts of conspiracy charges early this year. 

Nishad Singh, the former director of engineering at FTX, pleaded guilty to six conspiracy charges, including conspiracy to commit wire fraud, conspiracy to commit money laundering and conspiracy to violate federal campaign finances laws.

Singh is the third top executive and close confidante of FTX founder Sam Bankman-Fried to plead guilty and cooperate with prosecutors. Gary Wang, co-founder of FTX, and Caroline Ellison, the former head of FTX’s sister hedge fund Alameda Research, both pleaded guilty last year and are cooperating against Bankman-Fried.

Source

Nishad is reported to have been very into EA. 

Edit: Nishad knew by mid-2022 that Alameda was using FTX customer funds:

"I am unbelievably sorry for my role in all of this," Singh said, adding that he knew by mid-2022 that Bankman-Fried's hedge fund, Alameda Research, was borrowing FTX customer funds, and customers were not aware. Singh said that he would forfeit proceeds from the scheme. (Reuters)

I think it has been said that, among the leadership, Nishad Singh was pretty close to EA too. Further down the line, it is commonly said that Alameda especially attracted a lot of EA people, as that was part of its appeal from the beginning. Needless to say, though, these people would have been completely in the dark about what was happening until they were told, at the very end.

What's RB?

I think it makes sense to put more effort into growing cause areas separately than before, but that doesn't mean completely winding down EA movement-building, especially since I haven't heard of massive challenges with recruiting new members from university movement builders.

Some ideas held by many EAs - whether right or wrong, and implied by EA philosophy or not - encourage risky behaviour. We could call these ideas risky beneficentrism (RB), and they include:

i. High risk appetite.

ii. Scope sensitivity

iii. Unilateralism

iv. Permission to violate societal norms. Violating or reshaping an inherited morality or other “received wisdom” for the greater good.

v. Other naive consequentialism. Disregard of other second-order effects

Thanks, I missed that!

So I do have some sympathy with various points you raise, Ryan, but I think I have to push back on what I think is the central claim here.

Due to the reputational collapse of EA

I think this is a key crux. I think EA's reputation has been more mixed, and there definitely seems to be more negative pushback, but 'reputational collapse'? Unless that was meant to be more of a flourish, I think David Moss points to some pretty compelling evidence that this isn't true at the highest level.

I think the piece is great when it's your reflections on EA post FTX, but some of the larger interventions you suggest (e.g. winding down the EA movement) aren't justified if the 'reputational collapse' point isn't true.

In fact, on that last:

my enthusiasm for EA as a movement is gone

I'm sorry you feel that way. I'm a bit confused about the distinction, unless by 'EA Movement' you mean 'EA Community'? Like I don't see why the FTX debacle means that people should stop donating to GWWC? To me, GWWC/GiveWell/Animal Charity Evaluators/ARC Evals etc are all part of 'The EA Movement' and I don't think I feel less enthusiasm for them, or think they should be shut down, so that makes me think I don't quite grok what you mean by 'EA as a movement' here.

A lot of the comments seem fixated on, and want to object to, the idea of "reputational collapse" in a way that I find hard to relate to. This wasn't a particularly load-bearing part of my argument; it was only used to argue that the idea that EA is a particularly promising way to get people interested in x-risk has become less plausible. Which was only one of three reasons not to promote EA in order to promote x-risk. Which was only one of many strategic suggestions.

That said, I find it hard not to notice that the reputation of, and enthusiasm for, EA has changed, to a degree that must affect recruitment via EA to AI safety. If you're surrounded by EAs, it feels obvious. Trajan had a funereal atmosphere for weeks. Some were depressed for months. In news articles and on the forum, there was a cascade of PR disasters that took up much airtime from Q4 2022 to Q1 2023. There's been nothing like it in my 15 years around this community. The polling would have to have been pretty extraordinary to convince me that somehow I've misperceived what is really a pretty clear social reality.

The polling had some interesting findings, but not necessarily in a good way. The widely touted figure was that people's recalled satisfaction dropped "only" 0.5 on a ten-point scale. But most people rate their satisfaction around 7 most of the time, so this looks like an effect size of Cohen's d=0.4 or so. And this is in the more enthusiastic sample who were willing to keep answering the EA survey even after these disasters. Scanning over the next few questions, you then see that 55%+ of respondents now have some form of concerns about the EA community's meta organisations, and likewise the community and its norms - much more than the 25% who had some concerns with the philosophy. Moreover, 39% agree in some way that they want to see the community look very different, and the same number say they are less likely to associate with EA. And 31% substantially lost trust in EA public figures or leadership. Those who were more engaged were in most ways more concerned, which would fit with the selection effect hypothesis (those of the less engaged EAs who became disaffected simply left, and didn't respond to the survey). I find it really hard to understand those who would want to regard these results as "pretty compelling evidence" that EA has not suffered a major hit that would affect its viability as a way of recruiting to AIS.
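
To spell out the implied effect-size arithmetic in a minimal sketch (the standard deviation here is an assumption, not a figure reported by the survey):

```latex
% Cohen's d = (drop in mean satisfaction) / (standard deviation of the ratings).
% If satisfaction ratings cluster around 7 on a ten-point scale, the SD is plausibly
% around 1.2-1.3 (assumed), so a 0.5-point drop gives
\[
  d \;=\; \frac{\Delta \bar{x}}{\sigma} \;\approx\; \frac{0.5}{1.25} \;=\; 0.4,
\]
% which is conventionally read as a small-to-medium effect, despite the drop being
% "only" half a point on the raw scale.
```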

The polling of people outside the EA community is least convincing to me for a variety of reasons. Firstly, these people currently know least, and many of them will hear more in the future, such as when SBF's trial happens, the Michael Lewis book is released, or some of the nine film/video adaptations come. Importantly, if any of them become interested in EA, they are likely to hear about such things, and come to reflect the first cohort to a greater extent. But already, ~5% of them mention FTX in the interview, and ~1% of them mention it in the context of the meaning of EA or how they heard about it. In other words, the "scenario where promoting EA could go badly" is something that a community-builder would likely experience at least once. And those who know about FTX have a much more negative view (d=1.5 with high uncertainty). So although this is the more positive of the two batches of polling, I wouldn't necessarily gloss it as "there's no big problem".

I'm sorry you feel that way. I'm a bit confused about the distinction, unless by 'EA Movement' you mean 'EA Community'

I mean I've lost enthusiasm for the community/movement element, at least on a gut level. I've no objection to people donating, and living a consequentialist-leaning philosophy - rather I'm in favour of that so long as they're applying the ideas carefully.

Firstly, these people currently know least, and many of them will hear more in the future, such as when SBF's trial happens, the Michael Lewis book is released, or some of the nine film/video adaptations come.

I think this is an underappreciated point. If you look at Google Trends for Theranos, the public interest didn't really get going until a few years after the fraud was exposed, when popular podcasts, documentaries and TV series started dropping. I think the FTX story is about as juicy as that one. I could easily see a film about FTX becoming the next "The Social Network" or "The Big Short" (both directors are mentioned in your article), in which case it would instantly become the primary reference point for public perceptions of effective altruism.

I think there was perhaps some miscommunication around your use and my interpretation of "collapse". To me it implies that something is at an unrecoverable stage, like a "collapsed" building or support for a presidential candidate "collapsing" in a primary race. In your pinned quick take you posit that Effective Altruism as a brand may be damaged to an unrecoverable extent, which makes me feel this is the right reading of your post, or at least that it was a justified interpretation.

***

I actually agree with a lot of your claims in your reply. For example, I think that the last 12 months have been the worst the movement has ever faced. I think many inside the EA movement have lost a lot of faith and trust in both the movement as a whole and its leadership. I don't think anything I said in my comment would support the view that "there's no big problem"; I guess I just don't think things have to be so binary as "EA's reputation has collapsed" or "it's not that bad, we're back to normal".

Looking into the future, I think you do raise a very good point about risks from media that reflects on the FTX collapse and the role EA played. I think that's something that the movement does need to prepare for. I don't have specific suggestions myself, apart from believing that we need a more positive strategic direction to counter mistaken criticisms of us instead of leaving them unchallenged, and accepting reforms where they have merit, but I think that's a whole other discussion.

***

On the EA movement itself, I guess I find it harder to divorce it from the ideas and values of the movement. Here I think Scott's already said what I believe in a better way:

"a lot of media is predicting the death of EA, or a major blow to EA, or something in that category. Not going to happen. The media isn’t good at understanding people who do things for reasons other than PR. But most EAs really believe. Like, really believe. If every single other effective altruist in the world were completely discredited, I would just shrug and do effective altruism on my own. If they instituted the death penalty for effective altruism, I would do it under cover of night using ZCash. And I’m nowhere near the most committed effective altruist; honestly I’m probably below average. “Saint gets eaten by lions in Colosseum, can early Christianity possibly survive this setback?” Update your model or prepare to be constantly surprised."

Ideas have importance all on their own. The ideas that make up the philosophy of "Effective Altruism" exist, and cannot be destroyed. People would believe them, want to co-ordinate on it. Then they'd want to organise to help make their own ideas more efficient and boom, we're just back to an EA movement all over again.

I guess where we most disagree, given this, would be the tone/implications of your section 12.

a) I think most EAs are not yet ready to let go of the existence of an EA movement. 

I don't really know, given the above, that this is an option.

b) Still, it’s hard to see how tweaking EA can lead to a product that we should be excited about growing. 

I'm still excited about EA? I don't know how broad the 'we' is meant to be. I still want concern about reducing the suffering of non-human animals to grow, I still want humanity to expand its moral circle beyond the parochial, I still want us to find the actions individually and collectively that will lead to humanity flourishing. Apologies if I'm misinterpreting, but this sentence really seems to come out of left field to me given the rest of your post.

c) In this picture, EA would grow more slowly or shrink for a while, and maybe ultimately be overtaken by cause-specific communities.

I think again, given the ideas of EA exist, these cause-specific communities would find themselves connected again over time.

Scott's already said what I believe

Yes, I had this exact quote in mind when I said in Sect 5 that "Religions can withstand persecution by totalitarian governments, and some feel just about as strongly about EA."

People would believe them, want to co-ordinate on it. Then they'd want to organise to help make their own ideas more efficient and boom, we're just back to an EA movement all over again.

One of my main theses is supposed to be that people can and should coordinate their activities without acting like a movement.

I still want concern about reducing the suffering of non-human animals to grow, I still want humanity to expand its moral circle beyond the parochial, I still want us to find the actions individually and collectively that will lead to humanity flourishing. Apologies if I'm misinterpreting, but this sentence really seems to come out of left field to me given the rest of your post.

This feels like the same misunderstanding. Spreading EA ideas and values seems fine and good to me. It's the collectivism, branding, identity-based reasoning, and other "movement-like" characteristics that concern me.

I think again, given the ideas of EA exist, these cause-specific communities would find themselves connected

This seems like black and white thinking to me. Of course these people will connect over their shared interests in consequentialism, RCTs, and so on. But this is different from branding and recruiting together, regulating this area as one community, hosting student chapters, etc.

Thanks for explaining your viewpoints Ryan. I think I have a better understanding, but I'm still not sure I grok it intuitively. Let me try to repeat what I think is your view here (with the help of looking at some of your other quick takes)

note for readers, this is my understanding of Ryan's thoughts, not what he's said

 1 > The EA movement was directly/significantly causally responsible for the FTX disaster, despite being at a small scale (e.g. "there are only ~10k effective altruists")

2 > We should believe that without reform, similar catastrophes will continue to happen as the movement grows, which would lead to many more such FTX catastrophes (i.e. if we get to a movement of 1 million EAs in the current movement, we should expect ~100 FTX-scale disasters)

3 > This outcome is morally unacceptable, so the EA movement shouldn't grow in its current form

4 > An alternative way to grow would be to continue to grow as a set of different, interconnected movements focused around direct action (e.g. The xRisk community, The animal welfare community, the Global Development community, The Longtermist community etc...)

5 > This would allow EA values to spread without the harms that we see occurring with EA as a movement

I think I follow along. I'm not sure about the extrapolation of FTX (would it scale linearly or logarithmically? Does it actually make any sense to extrapolate as if EA will continue the same way at all?) But that aside, I think my main disagreement is about why a set of separate fields/communities that co-ordinate would be better at avoiding the failure modes you see in EA than the current one. I feel like "collectivism, branding, identity-based reasoning, and other 'movement-like' characteristics" are going to occur whenever humans organise themselves into groups.

I think perhaps an underlying disagreement we have is about the power of ideas. I just don't think you can cleanly separate the EA movement from EA values. Ideas are powerful things which have logical and empirical consequences. The EA movement has grown so much so quickly, in my view, because its ideas and values are true[1] and convincing. That causes movements and not the other way around. I guess I'm finding it difficult to picture what a movement-less EA would look like?

As an intuition pump, it'd be like a reformer saying Christians should just go to church on Sunday and listen to the sermon, follow the commandments, read the bible, tithe, and do good works, but not bother with all of the Father/Son/Holy Ghost stuff. But that belief is the reason why they're doing the former. In a world where that was attempted to be removed, I think people would either stop doing the activities or reinvent them.

  1. ^

    or true-enough, or seem true enough. Not claiming EA has anywhere near 'ultimate truth' on any issue

Roughly yes, with some differences:

  • I think the disasters would scale sublinearly
  • I'm also worried about Leverage and various other cults and disasters, not just FTX.
  • I wouldn't think of the separate communities as "movements" per se. Rather, each cause area would have a professional network of nonprofits and companies.

Basically, why do mid-sized companies usually not spawn cults and socially harm their members, like movements such as EA and the animal welfare community sometimes do? I think it's because movements by their nature try to motivate members towards their goals, using social pressures. This attracts young idealists, some of whom will be impressionable. People will try radical stuff like traveling to locations where they're unsupported, going on intensive retreats, circling, drugs, polyamory, etc. These things benefit some people in some situations, but they also can put people in vulnerable situations. My hypothesis is that predators detect this vulnerability and then start even crazier and more cultish projects, arguably including Leverage and FTX, under the guise of advancing the movement's goals.

Companies rarely put junior staff in such vulnerable positions. People generally know not to sleep with subordinates, and conflicts of interest are better managed. Companies don't usually give staff a pass on misbehaviour for being value-aligned.

We don't need to lose our goals, or our social network, but we could strip away a lot of the risk-increasing behaviour that "movements" engage in, and take on some risk-reducing "professionalising" measures that are more typical of companies.

I agree that ideas are powerful things, and that people will continue to want to follow those ideas to their conclusions, in collaboration with others. But I'm suggesting that being faithful to those ideas might mean shaping up a little and practising them somewhat differently. In the case of Christianity, it's not like telling Christians to disavow the Holy Trinity. It's more like noticing abuse in a branch of Christianity and thinking "we've got to do some things differently". Except that EA is smaller and thousands of years younger, so we can be more ambitious in the ways we try to reform.

JWS, do you think EA could work as a professional network of “impact analysts” or “impact engineers” rather than as a “movement”?

Ryan, do you have a sense of what that would concretely look like?

If we look at other professions, engineers, for example, have in common some key ideas, values, and broad goals (like ‘build things that work’). Senior engineers recruit young engineers and go to professional conferences to advance their engineering skills and ideas. Some engineers work in policy or politics, but they clearly aren’t a political movement. They don’t assume engineering is a complete ethos for all major life decisions, and they don’t assume that other engineers are trustworthy just because they are engineers.

I share your appreciation for EA ideas and think they’ll have longevity. I don’t know that there is a way to push back against the pitfalls of being a social movement instead of just being a collection of professionals. But I agree with Ryan that if there were a way to just be a group of skilled colleagues rather than “brethren”, it would be better. Social movements have the pitfalls of religions, tribes, and cults that most professions do not and fall prey to more demagogues as a result.

JWS, do you think EA could work as a professional network of “impact analysts” or “impact engineers” rather than as a “movement”?

I guess I still don't have a clear idea of what Ryan's 'network of networks' approach would look like without the 'movement' aspect, broadly defined. How different would it be in practice from current EA, just with more decentralisation of money and power and more professional norms?

But would this be a set of rigid internal norms that prevent people in the philanthropy space from connecting with those in specific cause areas? Are we going to split the AI technical and governance fields strictly? Is nobody meant to notice the common philosophical ideas which underlie the similar approaches to all these cause areas? It's especially the latter that I'm having trouble getting my head around.

Some engineers work in policy or politics, but they clearly aren’t a political movement. They don’t assume engineering is a complete ethos for all major life decisions, and they don’t assume that other engineers are trustworthy just because they are engineers.

I don't think that 'field of engineering' is the right level of analogy here. I think the best analogies for EA are other movements, like 'Environmentalism' or 'Feminism' or 'The Enlightenment'.

Social movements have the pitfalls of religions, tribes, and cults that most professions do not and fall prey to more demagogues as a result.

Social movements have had a lot of consequences in human history, some of them very positive and some very negative. It seems to me that you and Ryan think there's a way to structure EA so that we can cleanly excise the negative parts of a movement and keep the positive parts without being a movement, and I'm not sure that's really possible, or even a coherent idea.

***

[to @RyanCarey I think you updated your other comment as I was thinking of my response, so folding in my thoughts on that here]

We don't need to lose our goals, or our social network, but we could strip away a lot of the risk-increasing behaviour that "movements" engage in, and take on some risk-reducing "professionalising" measures that are more typical of companies.

I'm completely with you here, but to me this is something that ends up miles away from 'winding down EA', or EA being 'not a movement'.

But I'm suggesting that being faithful to those ideas might mean shaping up a little and practising them somewhat differently. In the case of Christianity, it's not like telling Christians to disavow the Holy Trinity. It's more like noticing abuse in a branch of Christianity and thinking "we've got to do some things differently".

I think abuse might be a bit strong as an analogy, but directionally I think this is correct, and I'd agree we need to do things differently. But in this analogy I don't think the answer is to end 'Christianity' as a movement and set up an overlapping network of tithing, volunteering, Sunday schools, etc., which is what I take you to be suggesting. I feel like we're closer to agreement here, but on reflection the details of your plan here don't sum up to 'end EA as a movement' at all.

this is something that ends up miles away from 'winding down EA', or EA being 'not a movement'.

To be clear, winding down EA is something I was arguing we shouldn't be doing.

I feel like we're closer to agreement here, but on reflection the details of your plan here don't sum up to 'end EA as a movement' at all.

At a certain point it becomes semantic, but I guess readers can decide, when you put together:

whether or not it counts as changing from being a "movement" to something else.

Fair.

Having run through the analogy, I think EA becoming more like an academic field or a profession rather than a movement seems very improbable.

I agree that “try to reduce abuses common within the church” seems a better analogy.

JWS, do you think EA could work as a professional network of “impact analysts” or “impact engineers” rather than as a “movement”? Ryan, do you have a sense of what that would concretely look like?

Well I'm not sure it makes sense to try to fit all EAs into one professional community that is labelled as such, since we often have quite different jobs and work in quite different fields. My model would be a patchwork of overlapping fields, and a professional network that often extends between them.

It could make sense for there to be a community focused on "effective philanthropy", which would include OpenPhil, Longview, philanthropists, and grant evaluators. That would be as close to "impact analysis" as you would get, in my proposal.

There would be an effective policymaking community too.

And then a bevy of cause-specific research communities: evidence-based policy, AI safety research, AI governance research, global priorities research, in vitro meat, global catastrophic biorisk research, global catastrophic risk analysis, global health and development, and so on.

Lab heads and organisation leaders in these research communities would still know that they ought to apply to the "effective philanthropy" orgs to fund their activities. And they would still give talks at universities to try to attract top talent. But there wouldn't be a common brand or cultural identity, and we would frown upon the risk-increasing factors that come from the social movement aspect.

The "SBF was psychopathic" point is interesting. I wonder if it would be worthwhile to think about "hardening the community against psychopaths" from first principles, without much focus on the FTX situation in particular. Like, if psychopaths are people who try to work any system to their advantage, then a system focused on preventing FTX could still be a system that psychopaths can work to their advantage in some other way.

we had a stronger community health team with a broad mandate for managing risks, rather than mostly social disputes and PR? Maybe, but CH already had a broad mandate on paper. Given EVF’s current situation, it might be a tall task. And if VCs and accountancies didn’t see FTX’s problems, then a beefed-up CH might not either. Maybe a CH team could do this better independently of CEA.

(Context - I was the interim head of the Community Health team for most of this year)


For what it’s worth, as a team we've been thinking along similar lines (and having similar concerns) about how we could best use the team’s capacity to address the most important problems, while being realistic about our areas of strength. We’re aiming to get to a good balance.

Apart from pivoting to “x-risk”, what else could we do?


Cultivate approaches to heal psychological wounds and get people above baseline in their ability to coordinate and see clearly.

CFAR was in the right direction goalwise (though its approach was obviously lacking). EA needs more efforts in that direction. 

“we shape the ideology to steer clearer of RB & naive consequentialism?”

I’m strongly in favour of this as something for CEA to aim for via the content of EA intro fellowships.

Specifically, EA should actively condemn law-breaking in the pursuit of doing good as a general principle, and accept that EA isn’t just applied consequentialism but also endorses broad deontological principles like “don’t break the law” and “don’t lie”.

I disagreed with this because good, sophisticated consequentialists should follow those rules on consequentialist grounds.

I think this is impossible. We can all easily point to situations where breaking the law is a moral obligation; this was as true for Christians in Rome as it is for utilitarians.

What is needed is a clear appreciation of what the law is for, and why it has a very special value as a social coordination device even if there is room for improvement in its material content. This is an excellent piece on that:

https://www.utilitarianism.com/utilitarianism-justice.pdf

"Don't be casual about sex and law-breaking" is a good piece of advice :-)
