Summary: Emrik and I discussed ways to improve EA. We mainly discussed my proposal that EA debate more and organize its debates better. Debate is an asymmetric weapon, unlike approaches based on social status, popularity, and influence. We also went through the flowchart of a debate methodology I developed. The call had a friendly vibe, and was mostly about explaining and sharing ideas, with only a little debating.
Watch the discussion on YouTube
Outline of the tree discussed in the call
Truth Seeking vs. Status/Influence
(Everything said about Effective Altruism (EA) applies to many other groups. These are widespread issues.)
by Elliot Temple, Nov 2022, https://criticalfallibilism.com
- Improving EA
  - I’d like to improve EA a large amount. I have ideas for how to do that.
  - But if you just go to EA with an idea like that, one of the main things that happens is … nothing.
  - How do you get attention for a great idea?
- Truth Seeking Processes
  - What makes a process truth-seeking?
    - True ideas have a large advantage
      - Example: rational, productive debate
        - You need a process that does critical analysis. And realistically it needs to involve people talking with each other some.
        - That basically means critical discussion or debate. People need to think critically about ideas and talk about their criticisms. Then true ideas have a large advantage because they will do better in critical analysis. (That’s not a guarantee, but it sure helps compared to other approaches that don’t significantly advantage true ideas.)
      - Truth-seeking processes are symmetrical
        - That means they work well regardless of which idea or side is correct
        - A truth-seeking process should get a good outcome, for everyone, whether I’m right or EA is right
        - If whoever turns out to be wrong is disliked, mocked, viewed as low status, punished, etc., that’s not truth-seeking
        - Being wrong shouldn’t be shameful
          - (Bad faith, bad intentions, dishonesty, falsifying data, etc., are different than merely being wrong.)
    - What if my suggestions for EA are incorrect?
      - A truth-seeking process should work about equally well whether I’m right or wrong. It should get a good outcome either way. If I’m right, it should have a good chance to figure that out. If instead EA is right, it should have a good chance to figure that out.
      - If I’m right, a truth-seeking process should (probably) lead to EA changing, learning, reforming.
      - If I’m wrong, a truth-seeking process should (probably) lead to me changing, learning, reforming.
      - Whoever is wrong gets the better deal: they get to learn something. (If their error is known. This is all based on the best knowledge anyone has, not omniscience.)
      - Ignoring wrong-appearing ideas has two main problems.
        - First, you’re fallible. The ideas you ignore might actually be true.
        - Second, you’re not giving the mistaken critic any way to change his mind and learn better. An opportunity for progress is lost, and he won’t become your supporter – he’ll instead go around telling people that he criticized your group, and you wouldn’t give any counter-arguments, so you’re clearly irrational and probably wrong. Reasonable people will be alienated from your group.
- Social/Influence Processes
  - Being influential is something that (people with) false ideas can do well at
  - Getting high social status is something that (people with) false ideas can do well at
  - Marketing is something that (people with) false ideas can do well at
  - Making friends is something that (people with) false ideas can do well at
  - Establishing rapport is something that (people with) false ideas can do well at
  - Creating a social network is something that (people with) false ideas can do well at
  - Gaining karma/upvotes is something that (people with) false ideas can do well at
  - Becoming popular is something that (people with) false ideas can do well at
  - Making a positive first impression is something that (people with) false ideas can do well at
  - Fitting in well with other EAs is something that (people with) false ideas can do well at
  - Impressing people is something that (people with) false ideas can do well at
  - Appearing smart is something that (people with) false ideas can do well at
  - Writing post titles that people will click on (and won’t dislike as “clickbait”) is something that (people with) false ideas can do well at
  - Posting frequently and repetitively, on a forum where old posts get little attention, is something that (people with) false ideas can do well at
  - Appearing plausible to people is a social skill
  - Emoting in ways people like is a social skill. E.g. how do you come off ambitious instead of arrogant? Getting that right is different than truth-seeking.
Important point I forgot to say on the call
During the call, I got distracted and never said a key point: debate is for 1) reforming EA itself (I said that) and 2) setting a good example so that other groups will listen to reason (we didn't talk about this).
Spreading good debate norms to other groups would let any good arguments, including EA's, have much more impact. Imagine if 10% of all charities were open to debate and would change to more cost-effective approaches due to rational arguments. Imagine if companies and politicians were open to debate.
EA currently has HUGE problems with most of its best ideas being ignored – without counter-arguments – by almost everybody. This is so normalized that I think EAs don't notice and just take it for granted as how the world is.
I think this problem is fixable. If one decently sized group like EA were willing to become open to debate, I think that could show people the way and spread to other groups.
Put another way, I think getting EA to do rational debate is a harder problem than getting other groups to start doing it after EA. People shouldn't be put off because it seems hard to get other people/groups to be rational; the key issue is for someone to actually go first and get it right themselves. In other words, scaling rational debate from 1 person to 10,000 is hard; scaling from 10,000 to millions is easier. You don't need to worry so much about the mass adoption problem. It's the early adoption problem that's more important. Getting to 10,000 might not even be hard once there were 100 people doing it, because then there would be many positive examples (productive debates) that would be hard to ignore without engaging.
BTW, does anyone want to debate with me?
Really intrigued by the idea of debates! I was briefly reluctant about the concept at first, because what I associate with "debates" usually comes from politics, religious disputes, debating contests, etc., where the debaters usually lack so much essential internal epistemic infrastructure that the debate format often just makes things worse. Rambly, before I head off to bed:
Part of what's going on here is that Popperian epistemology says, in brief summary, that we learn by critical thinking and debate (both within our own minds and with others). Bayesian epistemology does not say that; it (comparatively) downplays the roles of debate and criticism.
In the Popperian view, a rational debate is basically the same process as rational thinking, but externalized to involve other people. Or, put the other way around, trying to make critical arguments about the ideas you're considering in your head is one of the main aspects of thinking.
I'm unaware of any Bayesian who claims to have adequate knowledge of Popper and has written some kind of refutation of Popperian epistemology, or who endorses and takes responsibility for a particular refutation written by someone familiar with Popper's views. This is asymmetric: Popper wrote refutations of Bayesian ideas and generally made a significant effort to critically analyze other schools of thought besides his own and to engage with critics.
The things I'm most interested in debating are broad, big-picture issues, like debate methodology or the current state of the Popper/Bayes debate (e.g. what literature exists, what is answered by what, what is unanswered). Attempts to debate other topics will turn into big-picture discussions anyway, because I will challenge premises, foundations or methodology.
The debate topic doesn't really matter to me because, if it isn't one of these big-picture issues, I'll just change the topic. The bigger-picture issues have logical priority. Reaching a conclusion about e.g. poverty depends on debate and thinking methodology: what epistemology is correct, what knowledge is, what a good argument is, what it takes to reach a conclusion about an issue, how people should behave during debates, when people should use literature references or write fresh arguments, etc. I don't want to name some attention-getting issues as potential debate topics and then effectively bait-and-switch people by only talking philosophy. I'll either talk about the issues I think have logical priority or else, if we disagree about that, about which issues have logical priority and why. Either way, it'll be fairly far removed from any EA causes, though it'll have implications for EA causes.