This is an article from my Substack that I thought should be posted here. I would love to get some feedback on what y'all think about the general critique -- comments, criticisms, and more! 


Hi Noah, since I drew the "potential rebuttal" to your attention, could you update your post with the link? Good citation practice :-)

Also, fwiw, I find the clickbaity title rather insulting. It's not really true that being willing to revise some commonsense moral assumptions in light of powerful arguments automatically makes one "bad at moral philosophy". It really depends on the strength of the arguments, and how counterintuitive it would be to reject those premises. Common sense is inconsistent, and the challenge of moral philosophy is to work out how best to resolve the conflicts. You can't do that without actually looking into the details.

First of all, you are correct -- I will do that now.
Second, yeah, I hear that -- I will revise.

I think the underlying issue here is that no one really has a good method for deciding how much weight to give individual case judgments versus judgments about general principles, or even an idea of how we'd find such a method. Though I agree it is worrying that EA is at an extreme in the weight it gives to general principles; "give both weight" does seem more sensible. I am not a pure utilitarian or hedonist because sometimes no amount of argument will make me give up a specific case judgment: e.g. that there is no degree to which you enjoy torturing someone that makes torturing them okay, even if we stipulate no negative indirect effects of the torture; or that it wouldn't be bad for me if all my family and friends were replaced by unfeeling, unconscious robots that were behaviorally identical to the originals and I never found out. (Although in saying that I am actually going beyond "give some weight to case judgments as well as general principles" to "prioritize these case judgments over general principles absolutely", which is presumably in some sense just as "bad" as "prioritize general principles absolutely over case judgments".)

But "reflective equilibrium" isn't really a method that "real" moral philosophers employ to reach highly justified conclusions. It's just the claim that you should give some weight to both general principles and specific case judgments, which doesn't tell you what to do in a specific case where they conflict . I'd also say that at least some utilitarians claim that their views do in fact do well at accommodating our intuitions once you move beyond "naive" utilitarianism, because ruthless, conventional immoral maximizing behaviour is highly likely to fail to actually maximize utility when attempted by fallible humans. Now, that claim may be incorrect, but even if it's totally wrong, the utilitarians who say that aren't making a second mistake of giving intuitions about particular cases zero weight, in addition to their first mistake of thinking utilitarianism scores well on accommodating our intuitions. 

I'd also say that many of the abstract principles which (taken together) generate weird- or absurd-seeming EA conclusions are individually very weak and hard to deny. They are not claims like "utilitarianism is true" but rather more like "adding extra happy people to the world without hurting anyone else is fine", "increasing equality whilst increasing the average standard of living and changing nothing else is an improvement, even if some people lose out very slightly", or "if A is better than B and B is better than C, then A is better than C". See for example:

https://www.goodthoughts.blog/p/puzzles-for-everyone?r=jitor&s=w&utm_campaign=post&utm_medium=web
https://users.ox.ac.uk/~mert2060/webfiles/Reconstructing-Arrhenius-for-web.pdf

This is actually one of the most interesting things you learn from studying philosophy, in my view. When you first start studying, it's very natural (at least if you're the kind of person who likes analytic philosophy) to think that highly abstract principles that feel "true by definition" are somehow "more certain" than even very obvious everyday beliefs like "I know lots of other human beings exist" or "I can see a tree outside my window right now." After all, you can come up with an internally coherent scenario in which you're wrong about the latter, like "I am actually in a virtual reality with only one real human inhabitant". But it's hard even to make sense of what it would be for claims like "if A is better than B and B is better than C, then A is better than C", or "nothing can be both true and false", or "a sentence 'p' is true if and only if p"* to be wrong.

But once you start doing philosophy, you quickly discover that you can't actually hold onto all obvious claims like this, because they are sometimes incompatible or lead to absurd consequences. (For example, the liar paradox shows you can't hold onto both the second and the third claim, along with a bunch of other equally obvious-sounding logical principles, at the same time.) So after a while, you realize that some very abstract claims of this sort must be wrong after all, and it seems like you should probably be less confident of them in practice than you are of everyday empirical claims that you would never in practice doubt. (Though I suspect there are reasons some philosophers would deny this.)

A worry about EA and "principles" is that it hasn't learnt this lesson for cases where what clashes with super-abstract principles is not everyday empirical stuff, but rather apparently obvious ethical judgments about particular cases. We know from the investigations into infinite ethics by various EAs that you can't actually hold onto ALL apparently obvious general ethical principles at the same time, so some must be wrong: https://jc.gatspress.com/pdf/infinite_ethics_revised.pdf This in turn casts doubt on the idea that super-abstract principles that appear "certain" are better known than ordinary case judgments more generally. (If some principles that look completely beyond doubt must be wrong, then appearing beyond doubt in this way clearly does not establish that a principle is correct with 100% certainty.)


* https://en.wikipedia.org/wiki/T-schema
