Edit: If you are landing here from the EA Forum Digest, note that this piece is not about Manifest, and I don't want it to be framed as being about Manifest.

Recently, I've noticed a growing tendency within EA to dissociate from Rationality. Good Ventures has stopped funding efforts connected with the rationality community and with rationality more broadly, and there are increasing calls for EAs to distance themselves.

This trend concerns me, and I believe an important distinction is being missed when considering this split.

We need to differentiate between 'capital R' Rationality and 'small r' rationality. By 'capital R' Rationality, I mean the actual Rationalist community, centered around Berkeley: a package deal that includes ideas about self-correcting lenses and systematized winning, but also extensive jargon, cultural norms like polyamory, a high-decoupling culture, and familiarity with specific memes (ranging from 'Death with Dignity' to 'came in fluffer').

On the other hand, 'small r' rationality is a more general concept. It encompasses the idea of using reason and evidence to form conclusions, scout mindset, and empiricism. It also includes a quest to avoid getting stuck with beliefs resistant to evidence, techniques for reflecting on and improving mental processes, and, yes, many of the core ideas of Rationality, like understanding Bayesian reasoning.
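As a toy illustration of that last item (not from the original post; the numbers are invented for the example), a single Bayesian update is just a few lines of arithmetic:

```python
# Toy Bayesian update: how much should one piece of evidence shift a belief?
# Illustrative numbers only.
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | E) via Bayes' rule."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# A hypothesis you give 1% credence, and evidence that is 10x more likely
# if the hypothesis is true (90%) than if it is false (9%):
posterior = bayes_update(0.01, 0.9, 0.09)
print(round(posterior, 3))  # 0.092 -- the 1% prior rises to ~9%
```

The point is not the arithmetic itself but the habit of asking "how much more likely is this evidence under my hypothesis than under its negation?" before updating.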

If people want to distance themselves, it's crucial to be clear about what they're distancing from. I understand why some might want to separate from aspects of the Rationalist community – perhaps they dislike the discourse norms, worry about negative media coverage, or disagree with prevalent community views. 

However, distancing yourself from 'small r' rationality is far more radical and likely less considered. It's similar to rejecting core EA ideas like scope sensitivity or cause prioritization just because one dislikes certain manifestations of the EA community (e.g., SBF, jargon, hero worship).

Effective altruism is fundamentally based on pursuing good deeds through evidence, reason, and clear thinking - in fact, when early effective altruists were looking for a name, one of the top contenders was 'rational altruism'. Discarding the aspiration to think clearly would, in my view, remove something crucial.

Historically, the EA community inherited much of its epistemic culture from Rationality[1] – including discourse norms, an emphasis on updating on evidence, and a spectrum of thinkers who don't hold either identity closely but can be associated with both EA and rationality.[2]

Here is the crux: if the zeitgeist pulls effective altruists away from Rationality, they should invest more in rationality, not less. Because it is critical for effective altruism to cultivate reason, someone will need to work on it. If people connected to Rationality are no longer the ones EAs mostly talk to, someone else will need to pick up the baton.

  1. ^

Clare Zabel expressed a similar worry in 2022:

    Right now, I think the EA community is growing much faster than the rationalist community, even though a lot of the people I think are most impactful report being really helped by some rationalist-sphere materials and projects. Also, it seems like there are a lot of projects aimed at sharing EA-related content with newer EAs, but much less in the way of support and encouragement for practicing the thinking tools I believe are useful for maximizing one’s impact (e.g. making good expected-value and back-of-the-envelope calculations, gaining facility for probabilistic reasoning and fast Bayesian updating, identifying and mitigating one’s personal tendencies towards motivated or biased reasoning). I’m worried about a glut of newer EAs adopting EA beliefs but not being able to effectively evaluate and critique them, nor to push the boundaries of EA thinking in truth-tracking directions.

  2. ^

The EA community actually inherited more than just ideas about epistemics: compare, for example, Eliezer Yudkowsky's 2007 essay on Scope Insensitivity with current introductions to effective altruism in 2024.

Comments

I think two things:

-Some people don't like the big R community very much.

-Some people don't think improving the world's small-r rationality/epistemics should be a leading EA cause area.

are getting conflated into a third position no one holds:

-People don't think it's important to try hard at being small-r rational.


I agree that some people might be running together the first two claims, and that is bad, since they are independent, and it could easily be high impact to work on improving collective epistemics in the outside world even if the big R rationalist community was bad in various ways. But holding the first two claims (which I think I do moderately) doesn't imply the third. I think the rationalists are often not that rational in practice, and are too open to racism and sexism. And I also (weakly) think that we don't currently know enough about "improving epistemics" for it to be a tractable cause area. But obviously I still want us to make decisions rationally, in the small-r sense, internally. Who wouldn't! Being against small-r rationality is like being against kindness or virtue; no one thinks of themselves as taking that stand.

I don't think so. I think in practice

1. - Some people don't like the big R community very much.

AND

2a. - Some people don't think improving the EA community small-r rationality/epistemics should be one of top ~3-5 EA priorities. 
OR
2b.  - Some people do agree this is important, but don't clearly see the extent to which the EA community imported healthy epistemic vigilance and norms from Rationalist or Rationality-adjacent circles

=>

- As a consequence, they are at risk of distancing from small r rationality as a collateral damage / by neglect


Also, I think many people in the EA community don't think it's important to try hard at being small-r rational at the level of aliefs. Whatever the actual situation revealed by actual decisions, I would expect the EA community to at least pay lip service to epistemics and reason, so I don't think stated preferences are strong evidence.

"Being against small-r rationality is like being against kindness or virtue; no one thinks of themselves as taking that stand." 
Yes, I do agree almost no one thinks about themselves that way. I think it is maybe somewhat similar to "being against effective charity" - I would be surprised if people thought about themselves that way.

Eh, I agree with you that LW-style rationalists are far from sinless in this regard, but it's hard not to notice that many people, including on EAF, seem to have a strong revealed preference for irrationality.

I'm not sure why; one guess I have is that people (subconsciously) correctly identify rational irrationality as the best strategy to come across as loyal to one's tribe. I find this sad, but I don't have a real answer here; the incentives are strong and point in the wrong direction. 

In my ideal culture, everybody will be polite about it, but sloppy thinking will still be heavily censured, rather than rewarded. 


(slightly feverish, apologies if I'm not making as much sense, ironically). 

What instances do you have in mind by "strong revealed preference for irrationality"?

On LW, I thought the comments on this post were very poor, with a few half-exceptions. It wasn't even a controversial topic!

On EAF, I pragmatically am not that interested in either starting new fights, or relitigating past ones. I will say that making my comment here solely about kindness, rather than kindness and epistemics, was a tactical decision. 

Good Ventures have stopped funding efforts connected with the rationality community and rationality

Since that post doesn't specify specific causes they are exiting from, could you clarify if they specified that they are also not funding lower case r "rationality"?

Um, I did not know about "came in fluffer" until I googled it now, inspired by your post. I'm not a native English speaker, so I thought "fluffer" meant some type of costume, and that some high-status person showed up somewhere in it. My innocence didn't last long.

I'm not against sexual activities, per se, but do you really want to highlight and reinforce that as a salient example of "Rationality culture"?

Given the post is broadly negative about Rationality culture, choosing an obnoxious, sexual, and niche example strikes me as likely deliberate. 

However, distancing yourself from 'small r' rationality is far more radical and likely less considered.

Could you share some examples of where people have done this or called for it? 

From what I've seen online and the in person EA community members I know, people seem pretty clear about separating themselves from the Rationalist community. 

It would be indeed very strange if people made the distinction, thought about the problem carefully, and advocated for distancing from 'small r' rationality in particular.

I would expect real cases to look like
- someone is deciding about an EAGx conference program; a talk on prediction markets sounds subtly Rationality-coded, and is not put on schedule
- someone applies to OP for funding to create a rationality training website; this is not funded because making the distinction between Rationality and rationality would require too much nuance
- someone is deciding about what intro level materials to link to; some links to LessWrong are not included

The crux is really what's at the end of my text - if people take steps like the above, and nothing else, they are also distancing themselves from the 'small r' thing.

Obviously, part of the problem for the separation plan is that the Rationality and Rationality-adjacent community actually made meaningful progress on rationality and rationality education; a funny example here in the comments ... Radical Empath Ismam advocates for the split and suggests EAs should draw from the "scientific skepticism" tradition instead of Bay Rationality. Well, if I take that suggestion seriously and start looking for good intro materials relevant to the EA project (which "debunking claims about telekinesis" advocacy content probably isn't), I'll find the New York City Skeptics and their podcast, Rationally Speaking. Run by Julia Galef, who also later wrote Scout Mindset. Excellent. And also co-founded CFAR.

Yeah, for sure - I don't really understand how you could be an effective altruist without implementing a heavy dose of 'small r' rationality. I agree with the post and think it's a really important point to make and consolidate, but I don't think people are really calling for being less rational...

I don't find the concept of small "r" rationalist helpful, because what you describe to me actually sounds like "understand most of Kahneman and Tversky's work", and I wouldn't refer to that as rationalism but as cognitive psychology. I think in general even small r rationalism tries to repackage concepts in ways that are only new or interesting to people who haven't studied psychology, and in my opinion does so mostly in very distinct ways that tend to have non-stated underlying philosophical assumptions like objectivism and Kantian ideals. But cognitive psych doesn't (shouldn't?) have to be applied in those ways. Probably just read Joshua Greene's Moral Tribes and get on with your day? That's how I got into EA, and it does whatever you're describing as small r rationalism better than small r rationalism (if that makes sense?), without all the underlying non-stated assumptions that come with small r rationality and the ties with the big R community.

Reducing rationality to "understand most of Kahneman and Tversky's work" and cognitive psychology would be extremely narrow and miss most of the topic.

To quickly get an independent perspective, I recommend reading the overview section of The Handbook of Rationality (2021, MIT Press, open access). For an extremely crude calibration: the Handbook has 65 chapters. I'm happy to argue at least half of them cover topics relevant to the EA project. About ~3 are directly about Kahneman and Tversky's work. So, by this proxy, you would miss about 90% of what's relevant.

Yeah, I guess I'm saying the rest is probably not relevant or important for EA, and that's why I think little r rationality can be scrapped in favor of the important bits I highlighted. I realize I left out epistemology - so, just study epistemology and cognitive psych; that is the relevant bit for EA (in an admittedly oversimplified way to make a point).

This is a good account of what EA gets from Rationality, and why EAs would be wise to maintain the association with rationality, and possibly also with Rationality.

What does Rationality get from EA, these days? Would Rationalists be wise to maintain the association with EA?

Rationality has supported and been supported by EA a bunch. In that time, Rationality+EA has caused a bunch of harm (I’m not certain about net harm, but I do think a bunch of harm has happened: supporting scaling labs, supporting SBF, low integrity political manoeuvring (I hear)). I think Rationality should own its relationship to EA and its mixed legacy.

There's no reason to blame the Rationalist influence on the community for SBF that I can see. What would the connection be?

IIRC, while most of Alameda's early staff came from EA, the early investment came largely from Jaan Tallinn, a big Rationalist donor. This was a for-profit investment, not a donation, but I would guess that the overlapping EA/Rationalist social networks made the deal possible.

That said, once Bankman-Fried got big and successful he didn't lean on Rationalist branding or affiliations at all, and he made a point of directing his "existential risk" funding to biological/pandemic stuff but not AI stuff.

I think Rationality provided undirected support to EA during that period (sharing goodwill and labour, running events together), and received funding from EA funders, and so is not clean of the stuff listed in my comment. I think it probably overall made those things worse by supporting EA more, even if it helped the bad things somewhat less than it helped the good things.

Is it just me, or is there too much filler on some posts? This could have been a quick take: "If you distance yourself from Rationality, be careful not to distance yourself from rationality".

I'm in favour of distancing "Rationality" from EA.

As I've said elsewhere, LessWrong "rationality" isn't foundational to EA, and it's not even the accepted school of critical thinking.

For example, I personally come from the "scientific skepticism" tradition (think Skeptics Guide to the Universe, Steven Novella, James Randi, etc...), and in my opinion, since EA is simply scientific skepticism applied to charity, scientific skepticism is the much more natural basis for critical thinking in the EA movement than LW.

I've been in the EA movement for a long time and I can attest Rationality did not play any part in the EA movement in the early days.

I've been in the EA movement for a long time and I can attest Rationality did not play any part in the EA movement in the early days.

This is clearly wrong. You can watch talks or read about the history of the EA community by Toby or Will, and they will be clear that the Rationality community was a core part of the founding of the EA community. 

There are parts of the EA community (especially in the UK) that interfaced less, but there was always very substantial entanglement.

The post you linked to from Will MacAskill ("The history of the term 'effective altruism'" from 2014) doesn't reference the Rationality community (and the other links you included are to posts or pages that aren't from Will or Toby, but by Jacy Reese Anthis and some wiki-style pages). 

Do you have examples or links to talks or posts on EA history from Toby and Will that do discuss the Rationality community? (I'd be curious to read them. Thanks!)

@RobBensinger had a useful chart depicting how EA was influenced by various communities, including the rationalist community.

I think it is undeniable that the rationality community played a significant part in the development of EA in the early days. I’m surprised to see people denying this.

What seems more debatable is whether this influence is best characterized as “rationalism influenced EA” rather than “both rationalism and EA emerged to a significant degree from an earlier and broader community of people that included a sizeable number of both proto-EAs and proto-rationalists”.

Another podcast linked below with some details about Will and Toby's early interactions with the Rationality community. Also Holden Karnofsky has an account on LW, and interacted with the Rationality community via e.g. this extensively discussed 2011 post.

https://80000hours.org/podcast/episodes/will-macaskill-what-we-owe-the-future/

Will MacAskill: But then the biggest thing was just looking at what are the options I have available to me in terms of what do I focus my time on? Where one is building up this idea of Giving What We Can, kind of a moral movement focused on helping people and using evidence and data to do that. It just seemed like we were getting a lot of traction there.

Will MacAskill: Alternatively, I did go spend these five-hour seminars at Future of Humanity Institute, that were talking about the impact of superintelligence. Actually, one way in which I was wrong is just the impact of the book that that turned into — namely Superintelligence — was maybe 100 times more impactful than I expected.

Rob Wiblin: Oh, wow.

Will MacAskill: Superintelligence has sold 200,000 copies. If you’d asked me how many copies I expected it to sell, maybe I would have said 1,000 or 2,000. So the impact of it actually was much greater than I was thinking at the time. But honestly, I just think I was right that the tractability of what we were working on at the time was pretty low. And doing this thing of just building a movement of people who really care about some of the problems in the world and who are trying to think carefully about how to make progress there was just much better than being this additional person in the seminar room. I honestly think that intuition was correct. And that was true for Toby as well. Early days of Giving What We Can, he’d be having these arguments with people on LessWrong about whether it was right to focus on global health and development. And his view was, “Well, we’re actually doing something.”

Rob Wiblin: “You guys just comment on this forum.”

Will MacAskill: Yeah. Looking back, actually, again, I will say I’ve been surprised by just how influential some of these ideas have been. And that’s a tremendous testament to early thinkers, like Nick Bostrom and Eliezer Yudkowsky and Carl Shulman. At the same time, I think the insight that we had, which was we’ve actually just got to build stuff — even if perhaps there’s some theoretical arguments that you should be prioritising in a different way — there are many, many, positive indirect effects from just doing something impressive and concrete and tangible, as well as the enormous benefits that we have succeeded in producing, which is tens to hundreds of millions of bed nets distributed and thousands of lives saved.

I'm not sure which, but in one of Will's 80k podcast interviews he discusses the origins of EA and mentions Yudkowsky and LessWrong as one of three key strands (as well as the GWWC crew in Oxford and Holden/GiveWell).

https://80000hours.org/podcast/episodes/will-macaskill-moral-philosophy/

Robert Wiblin: We’re going to dive into your philosophical views, how you’d like to see effective altruism change, life as an academic, and what you’re researching now. First, how did effective altruism get started in the first place?

Will MacAskill: Effective altruism as a community is really the confluence of 3 different movements. One was Give Well, co-founded by Elie Hassenfeld and Holden Karnofsky. Second was Less Wrong, primarily based in the bay area. The third is the co-founding of Giving What We Can by myself and Toby Ord. Where Giving What We Can was encouraging people to give at least 10% of their income to whatever charities were most effective. Back then we also had a set of recommended charities which were Toby and I’s best guesses about what are the organizations that can have the biggest possible impact with a given amount of money. My path into it was really by being inspired by Peter Singer and finding compelling his arguments that we in rich countries have a duty to give most of our income if we can to those organizations that will do the most good.

There was a talk by Will and Toby about the history of effective altruism. I couldn't find it quickly when I wrote the above comment, but now found it:

I was involved in the EA movement from around 2014 in Sydney, Australia, which I expect is similar to the UK as you mentioned (but all the same, the UK and Australia were and are major centres for the EA movement, and our lack of interaction with the Rationalist community in those early days should be noted).

From my recollection, in those early days the local EA Sydney community would co-host events with the local LessWrong Rationality, Transhumanist, and Science Party groups just to get the numbers up for events. So yes, the Rationalists did mix with EA, but their contribution was on par with the Transhumanists'.

I don't recall rationality being a major part of major EA literature at the time (The Most Good You Can Do, The Life You Can Save). Even utilitarianism was downplayed as being fundamental to being an EA. It was later on that Rationality became more influential.

Sorry for the sarcasm, but what about returning to the same level of non-involvement and non-interaction between EA and Rationality as you describe happening in Sydney? I.e., EA events are just co-hosted with LW Rationality and Transhumanism, and the influence of Rationality ideas is kept on par with Transhumanism?

So on the object level I think we all agree: EA Sydney was co-hosting events with the rationalists in 2014.

It just seems odd to me to describe this as the influence not being important. But this might be because we simply differ about what 'important influence' implies.

I don’t think Sydney has ever been a major centre for the EA movement, and it’s not a very good proxy for the culture/s of major EA hubs.
