(Almost) everyone's beliefs contain massive amounts of unresolved contradictions, especially on political and ethical questions. The most reproducible evidence for this comes from framing effects in public opinion surveys: people's opinions on important political issues vary widely when the same question is asked in a slightly different way.
An edgy thirteen-year-old will take this argument and say, "Clearly, everyone outside of <MY SUBCULTURE> is an incompetent moron who can't be trusted with any real responsibility." But I want to argue the opposite! (Almost) no one has a fully self-consistent worldview. And yet most people are still able to lead their lives, make reasonable decisions, and work together to build a deeply-flawed-but-still-very-impressive society!
A big part of this is that most people rely on intuitive reasoning to make decisions. These decisions are far from perfect - otherwise there would be no overly-neglected causes for EAs to work on. But they tend to be reasonable - they mostly don't have drastic negative consequences. Most people are able to make mostly-reasonable intuitive decisions despite massive contradictions in their beliefs.
For example, many people, when asked directly, would say that overpopulation is a massive risk to human prosperity. But very, very few will take that view to its conclusion and proactively embark on a compulsory mass sterilization program.
(I don't want to be overly sanguine about the state of the world today. There are, of course, many people who make decisions with disastrous consequences, incredible amounts of unneeded suffering, and far too many people who do carry out mass sterilization programs. My view is driven by pessimism - with billions of agents holding inconsistent beliefs, it's easy to see that things could be much worse.)
EA is a difficult movement to criticize. It's reasonable to say that EA is just a question: "How do we do the most good?" But I think it's more accurate to say that EA is that question combined with a set of epistemic tools for answering it.
One epistemic tool that Effective Altruism loves is the long, multi-stage argument that depends on an extended chain of logic. EAs love these arguments even more if they lead to an unintuitive conclusion. The AI Lethalities post [1] is a great recent example of this tool.
Logical reasoning is great, but it only works if you start with true statements. The Principle of Explosion states that a chain of logical reasoning that starts from a contradiction can "prove" any statement at all, and reasoning from a false assumption can just as easily carry you to false conclusions. In many cases, you can get a reasonable-sounding proof which will hold up to all but the tightest inspection. If you're making an argument about AI safety, your starting assumptions are likely about consciousness or the limits of superintelligence. These assumptions are very hard to reason about, and you can very easily have hard-to-notice errors that lead to wild (and potentially dangerous) conclusions.
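As a minimal sketch of how explosion works formally (written in Lean 4, purely as my own illustration): once both P and ¬P are among your premises, any proposition Q follows, no matter how outlandish.

```lean
-- Principle of Explosion (ex falso quodlibet):
-- from premises P and ¬P, we can derive any proposition Q whatsoever.
example (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp
```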
Most people wouldn't fall into these traps. Most non-EA people, when they hear a long chain of arguments, will intuitively sanity-check the conclusion. If the conclusion doesn't match their intuition, they will probably not accept the argument, even if they can't find a definite flaw. Willingness to accept unintuitive truths is fundamental to what makes EA great. But believing unintuitive false conclusions can be incredibly dangerous.
It's very easy for me to imagine an EA supervillain who buys a false-but-convincing argument and creates a massive amount of suffering. That's a fundamental flaw in the epistemic tools used by EA.
[1] I disagree with some of Yudkowsky's points here, but this is an excellent post and I'm not trying to pick on it; I'm citing it as a representative example of a tendency in EA thought.
My answer to why people have mostly been able to cooperate despite inconsistent beliefs is that most people have low impact, à la power laws. In that low-impact regime, most people's intuitions agree with each other.
It's in the high-end technological regime where vast differences between people's morals start mattering. To put it in Scott Alexander's terms: "Mediocristan is like the route from Balboa Park to West Oakland, where it doesn’t matter what line you’re on because they’re all going to the same place. Then suddenly you enter Extremistan, where if you took the Red Line you’ll end up in Richmond, and if you took the Green Line you’ll end up in Warm Springs, on totally opposite sides of the map."
Or to put it another way: most people don't have major power today, so their inconsistent beliefs do limited damage - but that will become a big problem as technology advances.