Econ PhD student at Oxford and research associate at the Global Priorities Institute. I'm slightly less ignorant about economic theory than about everything else.
Thanks!
No, actually, we're not assuming in general that there's no secret information. If other people think they have the same prior as you, and think you're as rational as they are, then the mere fact that they see you disagreeing with them should be enough for them to update on. And vice versa. So even if two people each have some secret information, there's still something to be explained as to why they would have a persistent public disagreement. This is what makes the agreement theorem kind of surprisingly powerful.
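For reference, since it's doing the work here, the standard statement of Aumann's theorem (in my own notation, not anything from the post) is: if agents 1 and 2 share a common prior $P$ and receive private signals, and their posterior probabilities for an event $A$,
$$q_1 = P(A \mid \text{1's information}), \qquad q_2 = P(A \mid \text{2's information}),$$
are common knowledge between them, then $q_1 = q_2$. The private signals themselves never have to be shared; common knowledge of the posteriors, i.e. of the disagreement itself, is what does the work.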
The point I'm making here, though, is that you might have some "secret information" (even if it's not spelled out very explicitly) about the extent to which you actually do have, say, a different prior from them. That particular sort of "secret information" could be enough to make it inappropriate for you to update toward each other; it could account for a persistent public disagreement. I hope that makes sense.
Agreed about the analogy: you might have some inside knowledge about the extent to which your movement has grown because people have actually updated on the information you've presented them, as opposed to just selection effects or charisma. Thanks for pointing it out!
Thanks! Glad to hear you found the framing new and useful, and sorry to hear you found it confusingly written.
On the point about "EA tenets": if you mean normative tenets, then yes, how much you want to update on others' views on that front might be different from how much you want to update on others' empirical beliefs. I think the natural dividing line here would be whether you consider normative tenets more like beliefs (in which case you update when you see others disagreeing--along the lines of this post, say) or more like preferences (in which case you don't). My own guess is that they're more like beliefs--i.e. we should take the fact that most people reject temporal impartiality as at least some evidence against longtermism--but thanks for noting that there's a distinction one might want to make here.
On the three bullet points: I agree with the worries on all counts! As you sort of note, these could be seen as difficulties with "implementing the policy" appropriately, rather than problems with the policy in the abstract, and that is how I see them. But I take the point that if an idea is hard enough to implement, there might not be much to be learned from it in practice.
In many domains, not just ones involving deference, the probability of success in a project may be correlated with its value conditional on success, and we typically don't think that gets in the way of using probabilities in the usual way, no? If you're wondering whether some corner of something sticking out of the ground is a box of treasure or a huge boulder, maybe you think the probability that you can excavate it is higher if it's the box of treasure, and that there's only any value to doing so if it is. The expected value of trying to excavate is P(treasure) * P(success|treasure) * value of treasure. All the probabilities are "all-things-considered".
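To spell that out with purely made-up numbers (nothing here is from the original example): the full expectation is
$$\mathbb{E}[\text{value of trying}] = P(\text{treasure}) \, P(\text{success} \mid \text{treasure}) \, V + P(\text{boulder}) \, P(\text{success} \mid \text{boulder}) \cdot 0,$$
and the second term drops out because excavating a boulder is worth nothing. With, say, $P(\text{treasure}) = 0.1$, $P(\text{success} \mid \text{treasure}) = 0.8$, and $V = \$1{,}000$, the expected value of trying is $0.1 \times 0.8 \times 1{,}000 = \$80$. The correlation between success and value is handled just by conditioning, with all the probabilities all-things-considered.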
I respect you a lot, both as a thinker and as a friend, so I really am sorry if this reply seems dismissive. But I think there's a sort of "LessWrong decision theory black hole" that makes people a bit crazy in ways that are obvious from the outside, and this comment thread isn't the place to adjudicate all that. I trust that most readers who aren't in the hole will not see your example as a demonstration that you shouldn't use all-things-considered probabilities when making decisions, so I won't press the point beyond this comment.
I'm a bit confused by this. Suppose that EA has a good track record on an issue where its beliefs have been unusual from the get-go.... Then I should update towards deferring to EAs.
I'm defining a way of picking sides in disagreements that makes more sense than giving everyone equal weight, even from a maximally epistemically modest perspective. The way in which the policy "give EAs more weight all around, because they've got a good track record on things they've been outside the mainstream on" is criticizable on epistemic modesty grounds is that one could object, "Others can see the track record as well as you. Why do you think the right amount to update on it is more than they think it is?" You can salvage a thought along these lines in an epistemic-modesty-criticism-proof way, but it would need some further story about how, say, you have some "inside information" about the fact of EAs' better track record. Does that help?
Your quote is replying to my attempt at a "gist", in the introduction--I try to spell this out a bit further in the middle of the last section, in the bit where I say "More broadly, groups may simply differ in their ability to acquire information, and it may be that a particular group's ability on this front is difficult to determine without years of close contact." Let me know if that bit clarifies the point.
Re: "I currently don't think that epistemic deference as a concept makes sense, because defying a consensus has two effects that are often roughly the same size, …"
I don't follow. I get that acting on low-probability scenarios can let you get in on neglected opportunities, but you don't want to actually get the probabilities wrong, right?
In any event, maybe messing up the epistemics also makes it easier for you to spot neglected opportunities or something, and maybe this benefit sometimes kind of cancels out the cost, but this doesn't strike me as relevant to the question of whether epistemic deference as a concept makes sense. Startup founders may benefit from overconfidence, but overconfidence as a concept still makes sense.
Would you have a moment to come up with a precise example, like the one at the end of my “minimal solution” section, where the argument of the post would justify putting more weight on community opinions than seems warranted?
No worries if not—not every criticism has to come with its own little essay—but I for one would find that helpful!
Yup!