
The more I think about value monism, the more confused I get about why some people cling to it so tightly, even though our everyday experience seems to tell us that we are in fact not value monists. We care about many different values, and we also care about what values other people hold. When we ask people who are dying, most of them talk of friendship, love, and regrets. Does all of this count merely instrumentally toward one "super value" such as welfare, or are there some values we hold dear as ends in themselves?
I came up with a short thought experiment that might act as an intuition pump here. I would be interested in your thoughts!

Thought experiment: What do we care about at the end of time?

We are close to the end of time. Humanity has developed sophisticated technologies we can only imagine today. Still, only two very old humans remain alive: Alice and Bob. However, there also remain machines that can reliably predict the effects of medicines on states of consciousness and lived experience.

It seems the last day for both Alice and Bob has come. Alice is terminally ill and in severe pain; Bob is simply old but feels he is about to die a peaceful death soon. They have used up almost all of the medicine that was still around; only one dose of morphine remains.

The medical machines tell them that if Alice takes the morphine, her pain will be soothed, but the effect will be weaker than usual because her specific physiology dampens the effect of morphine. Bob, on the other hand, would have a really great time if he took the morphine: his specific physiology is extremely receptive to it, and he would experience unimaginable heights and states of bliss. The medical machines are entirely sure that net happiness would be several times higher if Bob took the morphine. If Alice took it, the two would simply have one last conversation and both die peacefully.

How should Alice and Bob decide? What values are important in their decision? 

Answers

I take the strongest argument for value monism to be something like this: if you have more than one value, you need to trade them off at some point. Given this, how do you decide the exchange rate? Either there is no principled exchange rate, in which case there is no principled way to trade the values off and no principled reason to invoke more than one value when making a decision anyway, which defeats the original intuition for recognizing more values. Or there is some commonality between these values that can determine the exchange, in which case that commonality, as it turns out, is the true intrinsic value, not either of the values being exchanged against one another. This dilemma applies whenever you trade off more than one value, so the principled solution will always tend to be finding one common value. There are of course various counterarguments, but hopefully this helps explain why people are drawn to it.
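One way to make the second horn of this dilemma concrete (my formalization, not the answerer's): a principled, complete policy for trading off two values $v_1$ and $v_2$ can be summarized by an aggregation function $U$, and then $U$ itself plays the role of the single intrinsic value:

$$A \succeq B \iff U\big(v_1(A),\, v_2(A)\big) \ge U\big(v_1(B),\, v_2(B)\big)$$

In the simplest case $U(v_1, v_2) = \alpha v_1 + \beta v_2$, and the ratio $\alpha / \beta$ is exactly the "exchange rate" the argument asks about.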

I mean, I do get the appeal. But as you say, it also has pretty huge drawbacks. I am curious how far people are willing to tie themselves to the mast and argue that value monism is actually a tenable position to take as a "life philosophy" despite its drawbacks. How far are you willing to defend your "principles" even when the situation really calls them into question? What would your reply to the thought experiment be?

Devin Kalish
The scenario given doesn't seem to pump the intuition for value pluralism so much as for prioritarianism. I suppose you could conceptualize prioritarianism as a sort of value pluralism, i.e. the value of helping those worse off plus the value of happiness, but you can also construct a single scale on which all that matters is happiness, while how much it matters doesn't exactly correspond to the amount of happiness. I at least usually think of it as importantly distinct from most plural value theories. I'm open to the possibility that this is just semantics, but it does seem to avoid some dilemmas typical plural value theories face (though not all).

More on the topic of what to do about counterintuitive implications: my approach is fairly controversial, in that I mostly say, if you can't bite the bullet, don't, but don't revise your theory to take the bullet away. In part this just seems like a more principled approach to me as a rule, but there are also important areas of ethics, like aggregation or population axiology, where basically no good answers exist, and this is pretty much provable. This is just the nature of ethics once you get really deep into the weeds. My impression is that most philosophers respond to this by not endorsing complete theories: they endorse certain specific principles that don't come with serious bullets and put off other questions where they don't see a way to escape the bullets. I don't think this ultimately fixes the problem for topics like these, where the territory of possibilities has been scoured pretty thoroughly, but for what it's worth it seems like a more common approach.
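A minimal formalization of the distinction (my notation, not the commenter's): both views take welfare as the only input, but prioritarianism sums an increasing, strictly concave transform $f$ of each person's welfare $w_i$, so the same increment counts for more when it goes to someone who is worse off:

$$V_{\text{util}} = \sum_i w_i \qquad\qquad V_{\text{prio}} = \sum_i f(w_i), \quad f' > 0,\ f'' < 0$$

Because welfare is still the only quantity being aggregated, this can be read as a single-scale theory rather than genuine value pluralism.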
Alexander Herwix 🔸
Yeah, I think the intuitions it pumps really depend on the perspective and mindset of the reader. For me, it triggered my desire to exhibit camaraderie and friendship in the last moments of life. I could also adjust the thought experiment so that nobody is hurt and simply ask whether one of them should take the morphine or whether they should die "being there for each other". I really do believe that we are kidding ourselves when we say that we only value "welfare" narrowly construed. But I get that some people may just look at such situations with a different mindset, and thus different intuitions are triggered.

Regarding your approach, I think the important thing to keep in mind is that "the map is not the territory", "theories are not truth", and "every model is wrong but some are useful, some of the time". Thus, there is not necessarily a need to "update" theories with every challenge one encounters, but it is still important to stay mindful of the limitations a given theory has and to consider alternative viewpoints, to ensure that one doesn't run around with huge blind spots. Moral uncertainty can help here to some degree, but acknowledging that we simply value more things than welfare maximization also seems an important step toward guarding against oversimplification. Interestingly, Spencer Greenberg made a related (much more eloquent) post today.
Devin Kalish
I endorse moral uncertainty, but I think one should be careful about treating moral theories like vague, useful models of some feature of the world. I am not a utilitarian because I think there is some "ethics" out there in the world and being utilitarian approximates it in many situations; I think the theory is the ethics, and if it isn't, the theory is wrong. What I take myself to be debating when I debate ethics isn't which model "works" best, but rather which one is actually what I mean by "ethics".
Alexander Herwix 🔸
This position seems confusing to me. Either (1) ethics is something "out there", which we can try to learn about and uncover. Then we would tend to treat all our theories and models as approximations to some degree, because issues similar to those in science apply. Or (2) we take ethics as something which we define in some way to suit our own goals. Then it is pretty arbitrary what models we come up with, and whether they make sense depends mainly on the goals we have in mind. This kind of mirrors the question of whether a moral theory is to be taken as a standard for judging ethics (1) or a definition of ethics (2).

Even if you opt for (2), the moral theory is still an instrument that should be treated as a useful means to an end-in-view. You want the definition to be convincing by demonstrating that it can actually get you somewhere desirable. Thus, it would be appropriate to acknowledge what this definition can and cannot do, so that people can make appropriate use of it. Whichever road you choose, you still come to the point where you need to debate which model "works" best. That's the beauty of philosophical and ethical discourse.

And turning back to the question of value monism, I think Spencer Greenberg has some interesting discussion for people who are moral anti-realists (people who fall in camp 2 above) and utilitarians. Maybe that's worth checking out.
Devin Kalish
Because my draft response was getting too long, I'm going to put it as a list of relevant arguments/points rather than in the conventional format; hopefully not much is lost in the process:

- Ethics does take things out there in the world as its subjects, but I don't take the comparison to empirical science to work in this case, because the methods of inquiry are more about discourse than empirical study. Empirical study comes at the point of implementation, not philosophy. The strong version of this point is rather controversial, but I do endorse it; I will return to it in a couple of bullets to expand on it.

- Even in the empirical sciences, the idea of theories just being rough models is not always relevant. It comes from both uncertainty and the positive view that the actual real answer is far too complicated to exactly model. This is the difference between, say, economics and physics: theories in both will be tentative and accept that they are probably just approximations right now because of uncertainty, but in economics this is not just a matter of historical humility; it is also a positive belief about complexity in the world. Physics theories are both ways of getting good-enough-for-now answers and positive proposals for ways some aspect of reality might actually be, typically with plurality but not majority credence.

- Fully defining what I mean by ethics is difficult, and of less interest to me than doing the ethics. Maybe this seems a bit strange if you think defining ethics is of supreme importance to doing it, but my feeling of disconnect between the two is probably part of why I'm an anti-realist. I'm not sure there's any definition I could plug into a machine to make an ethics-o-meter such that I would simply be satisfied taking its word on an answer (this is where the stronger version of bullet one comes in). This is sort of related to Brian Tomasik's point that if moral realism were true, and it turned out that the true ethics was just torturing as many squirrels…
Alexander Herwix 🔸
Hey Devin, first of all, thanks for engaging, and for the offer at the end. If you want to continue the discussion, feel free to reach out via PM.

I think there is some confusion about my, and also Spencer Greenberg's, position. Afaik, we are both moral anti-realists and are not suggesting that moral realism is a tenable position. Without presuming to know much about Spencer, I have taken his stance to be that he did not want to "argue" with realists in that post because, even though he rejects their position, it requires a different type of argument than what he was after there. He wanted to draw attention to the fact that moral anti-realism and utilitarian value monism don't necessarily and "naturally" go well together. Many of the statements he heard from people in the EA community were confusing to him, not because anti-realism is confusing, but because being anti-realist while steadfastly holding on to value monism is, given that we empirically seem to value many more things than just one "super value" such as "welfare", and that there is no inherent obligation that we "should" only value one "super value". He elaborates on that in another post as well. My point was also mainly to point out that we should see moral theories as instruments that can help us get more of what we value. They can help us reach some end-in-view and be evaluated in this regard; anything else is specious.

From my perspective, adopting classic utilitarianism can be very limiting because it can oversimplify and obscure what we actually care about in a given situation. It is maybe useful as a guide for considering what should be important, but I am trying not to delude myself that "welfare" must be the only thing I should care about. That would be akin to a premature closure of inquiry into the specific situation at hand. I cannot and will never be able to fully anticipate all relevant details and aspects of a real-world situation, so how can I be a priori certain that there is only one thing worth caring about?

I don’t think I find this a particularly difficult dilemma or a compelling objection to value monism. If everything is as you stipulate, then Bob should definitely take the morphine. If I were in Alice’s position, I would hope that I wouldn’t try to deprive Bob of such a special experience in order to experience a bit less pain.

Comments

This is the kind of scenario where something that would typically be welfare maximising (and right according to commonsense morality) is, by stipulation, not welfare maximising, while the actually welfare-maximising thing is wrong according to commonsense morality. That is: typically, the people who are greatly in need of pain medication are the people who would benefit most from it; typically, you shouldn't give strong pain medication to people with no medical need of it; typically, there are flow-through effects to consider, like addiction, upholding norms, social relations, and moral character (because the world isn't ending); and typically, you don't have futuristic super-computers giving you extremely high confidence that the typically-wrong thing is actually welfare maximising.

In this kind of scenario, I think it makes sense that one would intuitively judge it right to do the typically-right-but-by-stipulation-not-welfare-maximising thing, but one has reasonable (though not conclusive) grounds for just biting the bullet and saying that you should do the highly unusual welfare-maximising thing.

It's also not clear that one couldn't, in principle, account for the choice to give the medicine to Alice as a value monist, e.g. if you only care about weighted welfare (weighting more negative states more heavily).
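To make that concrete, here is a toy sketch (the welfare numbers are hypothetical, chosen by me to mirror the structure of the thought experiment, not taken from the post) of how a weighted-welfare monist could reach the opposite verdict from a plain sum:

```python
# Toy comparison of plain vs. priority-weighted welfare sums.
# All welfare numbers are hypothetical illustrations.

def priority_weight(welfare: float, k: float = 2.0) -> float:
    """Piecewise-linear concave transform: negative welfare counts k times as much."""
    return welfare if welfare >= 0 else k * welfare

# Welfare outcome per person under each option (arbitrary units).
options = {
    "Alice takes the morphine": {"Alice": -1.0, "Bob": 2.0},
    "Bob takes the morphine": {"Alice": -6.0, "Bob": 10.0},
}

for option, outcomes in options.items():
    plain = sum(outcomes.values())
    weighted = sum(priority_weight(w) for w in outcomes.values())
    print(f"{option}: plain sum = {plain}, weighted sum = {weighted}")

# The plain sums favour giving Bob the morphine (4.0 > 1.0), while the
# weighted sums favour Alice (0.0 > -2.0): one intrinsic value (welfare),
# two verdicts depending on the weighting.
```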

I agree with your general thrust. The thought experiment is a little contrived, but it is deliberately designed to make both options look somewhat plausible. A value monist negative utilitarian could also give the medicine to Alice, so it is not even clear which option one would go for.
 
However, what I really wonder is whether "welfare" is the only thing we care about at the end of time. Or is there maybe also the question of how we got there? How we handled ourselves in difficult situations? What values we embodied while we were alive? Are we not at risk of losing our humanity if we subordinate all of our behavior to a "principled" but "acontextual" value monist algorithm (e.g., always maximize "expected welfare")? These are the kinds of questions I want the thought experiment to trigger reflection on.

Value monism is the thesis that there is only one intrinsically valuable property.

(I didn't know this term)

I changed the title of the question and made some small changes to the text to make clearer what I am after with this. I would like to encourage reflection on the part of the value monist utilitarians in this forum. There may be instrumentally good reasons to use value monist utilitarian theories for some purposes, but we should be open-minded and forthright in acknowledging their limitations and not take any of them as a "moral theory of everything". Let's not mistake the map for the territory!

I do not think it has any compelling limitations.
