Recent events seem to have revealed a central divide within Effective Altruism.
On one side, you have the people[1] who want EA to prioritise epistemics, on the basis that if we let this slip, we'll eventually end up making decisions based on what's popular rather than what's effective.
On the other side, you have the people who are worried that if we are unwilling to trade off[2] epistemics at all, we'll simply sideline ourselves and lose the ability to have any significant impact.
- How should we navigate this divide?
- Do you disagree with this framing? For example, do you think that the core divide is something else?
- How should cause area play into this divide? For example, it appears to me that those who prioritise AI Safety tend to fall into the first camp more often, while those who prioritise global poverty tend to fall into the second camp. Is this a natural consequence of these prioritisation decisions, or is it a mistake?
Update: A lot of people disliked the framing, which suggests that I haven't found the right framing here. Apologies, I should have spent more time figuring out what framing would have been most conducive to moving the discussion forward. I'd suggest that someone else post a similar question with framing that they think is better (although it might be a good idea to wait a few days or even a week).
In terms of my current thoughts on framing, I wish I had more explicitly worded this as "saving us from losing our ability to navigate" vs. "saving us from losing our influence". After reading the comments, I'm tempted to add a third possible highest priority: "preventing us from directly causing harm".
I don't doubt that there are better ways of characterising the situation.
However, I do think there is a divide, when push comes to shove, between those who prioritise epistemics and those who prioritise optics/social capital.
I did try to describe the two sides fairly, i.e. "saving us from losing our ability to navigate" vs. "saving us from losing our influence". Both of these sound fairly important/compelling, and either could plausibly cause the EA movement to fail to achieve its objectives. And, as someone who did a tiny bit of debating back in the day, I...
Yeah, I'm not saying there is zero divide. I'm not even saying you shouldn't characterize both sides. But if you do, it would be helpful to find ways of characterizing both sides with similarly positively-coded framing. Like, frame this post in a way that would pass an ideological Turing test, i.e. so that people can't tell which "camp" you're in.
The "not racist" vs "happy to compromise on racism" was my way of trying to illustrate how your "good epistemics" vs "happy to compromise on epistemics" wasn't balanced, but I could have been more explicit in this.
Saying one side prioritizes good epistemics and the other side is happy to compromise epistemics seems to clearly favor the first side.
Saying one side prioritizes good epistemics and the other side prioritizes "good optics" or "social capital" seems to similarly favor the first side, though to a weaker extent. For example, I don't think it's a charitable interpretation of the "other side" that they're primarily doing this for reasons of good optics.
I also think asking the question more generally is useful.
For example, my sense is also that your "camp" still strongly values social capital, just a different kind of social capital. In...