This post condenses part of a book chapter, “The Alienation Objection to Consequentialism” (forthcoming in the Oxford Handbook of Consequentialism), which I coauthored with lead author Barry Maguire. The penultimate, preprint version of the chapter can be found here. All mistakes in this post are entirely my own.
I. Introduction
Most EAs are either consequentialists or lean consequentialist [1]. However, (my impression is that) many EAs also practice Bayesian epistemology and/or strive to take seriously various forms of uncertainty, including moral uncertainty and epistemic uncertainty arising from peer disagreement. In this post, I discuss an objection to consequentialism—the alienation objection—that has not, to my knowledge, received as much airtime within the EA community as have other objections, such as the over-demandingness objection and the cluelessness objection. I do not think the alienation objection is a knock-down objection to consequentialism. But I do think that, all-things-considered, it should lower our credence in consequentialism.
The post is structured as follows: I begin section II with an example to intuitively motivate the alienation concern (II.A). I then give a rough conceptual sketch of alienation (II.B). Section II ends by stating the alienation objection (II.C). Section III considers the prospects for avoiding alienation of two popular forms of consequentialism—Global Consequentialism (III.A) and Leveled Consequentialism (III.B)—that go beyond simple Act Consequentialism. Section IV concludes with some takeaways.
II. What is Alienation?
A. An Example
Say you accept Act Consequentialism, according to which (very roughly) you should do whatever has the best consequences, i.e., whatever produces the most value in the world. Say further that, one Sunday afternoon, the action with the best consequences is to go support a friend who’s having a tough mental health day. Since you accept Act Consequentialism, you go support your friend. And in taking this action, you’re motivated by the fact that supporting your friend has the best consequences relative to everything else you could have done.
One thought to have about this case is that you have the wrong motivation in visiting your friend. Plausibly, your motive should be something like ‘my friend is suffering; I want to help them feel better!’ and not ‘helping my friend has better consequences than anything else I could have done.’ Imagine what it would be like to frankly admit to your friend, “I’m only here because being here had the best consequences. If painting a landscape would have led to better consequences, I would have stayed home and painted instead.” Your friend would probably experience this remark as cold, or at least overly abstract and aloof. They might have hoped that you’d visit them because you care about them and about your relationship with them, not because their plight happened to offer you your best opportunity for doing good that afternoon. To put things another way, we might say that your motivation alienates you from your friend.
B. The Concept of Alienation
The example of visiting your friend (hopefully) gives some vague intuitive sense of alienation. But what, in general, is alienation? As I’ll understand it in this post, alienation arises when we are inhibited from participating in an important type of normative ideal. Rather than trying to say exactly what a normative ideal is, I’ll give some examples. On the interpersonal level, there are ideals of being a good friend, a loving spouse, and a nurturing parent. On the intrapersonal level, there are certain psychological ideals, such as having coherence among our commitments, beliefs, and motives (more on this below). And on a more macroscopic level, there are ideals of being a virtuous citizen and being appropriately connected to the natural world.
As these examples may suggest, the type of ideal I have in mind involves some sort of harmony, closeness, or connection between things when it is realised. Such harmony might exist, for example, between two people in the case of friendship; or among different elements in a person’s psychology; or between someone and the natural world; or perhaps even among different social classes in a just society (insofar as the just society features class distinctions).
Another aspect that emerges when we reflect on these ideals is that many of them call for multifocal fittingness: for us to be appropriately oriented towards whatever it is (e.g. a person, a group, or a project) in our actions, motives, affects, ways of thinking, and level of commitment, and to be normatively integrated in this way both at discrete points in time and across time. To give a concrete example, friendship involves a holistic and distinctive way of engaging with our friends that plays out in our actions, motives, emotions, and commitments. We have fun hanging out with our friends and support them when they’re down; we’re motivated to spend time with our friends and to help them simply out of friendship; we feel joyful when something good happens to our friends; we don’t abandon our friends unless serious countervailing moral reasons to do so arise; etc. Alienation is just the state of being inhibited from participating in such a normative ideal. To take one last example, a parent may be alienated from their child if they adopt an extremely formal and disciplinary style of parenting that inhibits them from being affectionate towards their child.
C. Consequentialism and Alienation: The Basic Argument
In brief, the alienation objection is that accepting consequentialism inhibits us from participating in key normative ideals.
Of course, this is only a problem for consequentialism if these ideals are actually ethically authoritative (i.e., if they provide some fundamental normative input into our answer to the question, ‘how should I live my life, all-things-considered?’). I won’t provide a positive argument for the conclusion that there is, in fact, at least one ethically authoritative normative ideal. And I freely admit that if there are no such ideals, the alienation objection to consequentialism gets nowhere.
Why should anyone care about the alienation objection, then? Here are two reasons.
First, it is intuitively plausible that some normative ideals are authoritative. For example, the relational state of being in a committed romantic relationship seems to generate internal norms of its own and thereby to furnish us with reasons to behave in certain holistic ways (again, the ideal involves our motives, thoughts, emotions, actions, etc. as they relate to our significant other). The plausibility that some ideals are authoritative warrants an investigation into whether consequentialism can accommodate them. If you have a non-zero credence that some normative ideals are authoritative, then it’s relevant to know whether consequentialism is compatible with these ideals.
Second, a methodological point: given the intuitive plausibility of their ethical authority, certain normative ideals such as deep friendship constitute inputs into the process of thinking about ethics. (This essentially means they are defeasible starting points.) Consequentialism does not have this epistemic status: it is not an input into ethical theory; rather it purports to be the final ethical theory. If accepting consequentialism is incompatible with participating in key normative ideals, then, barring a compelling argument that no normative ideals are authoritative, it is consequentialism, and not the ideals, that needs to be revised.
III. Two Consequentialist Strategies [2]
A. Global Consequentialism
Act Consequentialism says we should perform whichever action has the best consequences, i.e., whichever action does the most on-balance good. Said differently, Act Consequentialism applies a direct consequentialist assessment to actions. Very roughly, Global Consequentialism applies a direct consequentialist assessment to everything we can evaluate, rather than just to actions. So, according to Global Consequentialism, in addition to performing the actions with the best consequences, we should also have whichever motives have the best consequences, whichever character traits have the best consequences, whichever roof colour has the best consequences, etc. Here are three problems for Global Consequentialism:
Problem #1: Global Consequentialism conflicts with partiality, but key normative ideals require partiality.
Both interpersonal and intrapersonal non-alienation often require some degree of partiality. For instance, fully participating in the (interpersonal) ideal of friendship involves devoting some non-trivial amount of resources (e.g., time, caring attention, and money) to our friendships. Similarly, fully participating in the (intrapersonal) ideal of pursuing a ground project—i.e., a project that gives shape and meaning to our life, such as developing a lifelong passion for amateur photography (as did Derek Parfit)—involves non-trivial resource expenditures. Presumably, however, these resources can often do more good if we allocate them impartially, e.g. if we donate our money to effective charities rather than spending it hanging out with friends or pursuing ground projects. Since we could do more good by allocating our resources impartially, Global Consequentialism requires us to do so (to the neglect of our friendships and ground projects).
Problem #2: Global Consequentialism runs into a dilemma regarding our moral beliefs and motives.
In this subsection we’ll explore a dilemma for Global Consequentialism. I’ll start by briefly stating the dilemma and then go on to flesh it out. Here is the dilemma: Global Consequentialism either (a) allows us to have a “friendly,” non-alienated motive but, in doing so, sacrifices an intuitive fit between our moral beliefs and motives or (b) retains the intuitive fit between moral beliefs and motives but, in doing so, requires us to have an alienating motive.
We can begin to make sense of this dilemma by recalling a thought we had earlier: having a consequentialist reason as our motive can be alienating. In the case of the Act Consequentialist supporting their friend, we saw that there was something wrong with the explicitly consequentialist motive, ‘supporting my friend produces the most value out of anything I could be doing right now.’
Global Consequentialism can avoid this problem. Recall, Global Consequentialism says you should have whichever motive produces the most value. Plausibly, having a “friendly” motive—‘my friend is suffering; I want to help them feel better!’—produces more value than having the explicitly consequentialist motive. So, according to Global Consequentialism, you should have the friendly motive when you go to support your friend. And if you have the friendly motive, you are not alienated from your friend. So far so good.
If the Global Consequentialist adopts this strategy, though, they run into a new problem: they will not be motivated by their own moral beliefs. This is a problem because, plausibly, it is fitting to be motivated by our moral beliefs. To provide some intuitive support for this claim, say you have the moral belief that we should keep our promises (unless keeping them would cause someone serious undue harm). Say further that you keep a promise to help a new acquaintance move a couch into their apartment. In this scenario, it is fitting for part of your motivation to be something like, ‘well, I promised I’d help them move the couch today, so I’d better head on over and help them out.’ It would be strange if the fact that you promised to help move the couch did not show up in your motivation to help move the couch, even though you believe you should keep your promises.
Say we accept that it’s fitting to be motivated by our moral beliefs. The problem for the Global Consequentialist is that if they have a friendly motive, they are not being motivated by their moral beliefs. To see this, return to the case of supporting your friend who’s having a tough mental health day. In this case, the Global Consequentialist’s moral belief is that they should support their friend because doing so has the best consequences. If the Global Consequentialist were motivated by this moral belief, their motivation would be, ‘helping my friend has better consequences than anything else I could have done.’ As we said before, this motive is alienating.
However, I suggested above that the Global Consequentialist should, by their own lights, have the friendly motive in supporting their friend (because having the friendly motive produces more value than having the consequentialist motive). But the friendly motive is simply, ‘my friend is suffering; I want to help them feel better!’. There is nothing about producing the most value or having the best consequences in this motive. So the Global Consequentialist is not motivated by their own moral belief, which is that they should visit their friend in order to produce the most value. Rather, the Global Consequentialist is directly motivated out of concern for their friend. Being motivated in this way avoids alienation from their friend but sacrifices the intuitive fit between moral beliefs and motives.
Problem #3: Global Consequentialism has trouble making sense of another intuitive fit, this one between our motives and commitments.
Problem #2 implicitly endorsed a psychological ideal that involves harmony between our moral beliefs and motives. Global Consequentialism seems unable to accommodate this ideal. Let’s now consider the relationship between our motives and commitments. I suggest that having a bona fide commitment to something—whether that is to another person, or to a skill like playing the violin, or to a ground project of figuring out how to minimise the suffering of nonhuman animals on factory farms—makes it fitting to have certain motives. In particular, if you’re committed to something, it’s fitting to be motivated to spend time and effort positively engaging with it, whether that’s by spending quality time with the other person, or by practicing the violin, or by doing research on effective factory farming interventions.
According to Global Consequentialism, however, the fact that we’re committed to something has no direct relevance to whether we should have the motives that fit with our commitment. Rather, on Global Consequentialism, we should simply have whichever commitments maximise value and whichever motives maximise value. If it happens to work out that having motives which fit with our commitments maximises value, great, but if not, tough luck.
Here are two problems with this Global Consequentialist response. First, even when Global Consequentialism does tell us to have the motives that fit with our commitments, it does so for the wrong reason. Say you love your mom and are committed to maintaining a deep familial relationship with her and caring for her as she grows older. And say one day you are motivated by your love and care to spend an afternoon with your mom. In this case, you need and should have no further reason or explanation for this motivation other than something like, ‘I love and care about my mom, so I want to spend time with her.’ Put another way, your motive flows directly from your commitment. In contrast, Global Consequentialism says that the ultimate reason/justification for your caring motive is that having it maximises value. (An important comment about ethical theory: it’s not enough for a theory to give us the right answers. It needs to give us the right answers for the right reasons.)
The second problem is that accepting Global Consequentialism seems to undermine our even having the commitment in the first place. If we accept Global Consequentialism, we must accept that our commitments have no direct motivational significance. (Again, whether or not you should have a motive is, according to Global Consequentialism, solely a function of whether having it would produce the most value.) This is in tension with the plausible thought that being directly motivated in certain ways by what we’re committed to is just part of what it is to be committed to something.
B. Leveled Consequentialism
Leveled Consequentialism is centrally concerned with higher-level psychological states like commitments, values, identities/roles, internalised decision procedures, character traits, and dispositions. These states are “higher-level” in the sense that they are relatively robust and persistent—we can’t just choose to have a different character trait in the way that we can choose to take, or not to take, an action. They are also higher-level because they generate and coordinate lower-level things like thoughts, emotions, motives, and actions. To return to an example from earlier, if you’re genuinely committed to something, you’ll tend to think about that thing in certain ways, feel joy/excitement when good things happen with/to that thing, be motivated to constructively engage with it, etc.
Leveled Consequentialism says we should have whichever higher-level psychological states it would be best to have. I’m going to focus on indirect versions of Leveled Consequentialism, according to which we should have a higher-level state iff and because it’s value-maximising to have it, but it’s permissible to have non-value-maximising thoughts, emotions, and motives and to perform non-value-maximising actions if and because they “come from” a value-maximising higher-level state. For example: if it’s a core part of your identity to be a loyal friend, and this facet of your identity is approved on consequentialist grounds, then you are permitted to buy a plane ticket to go to your friend’s wedding, even though you could have done more good by declining the wedding invitation, donating the money you would have spent on the ticket, and not contributing so much to fossil fuel emissions by abstaining from air travel.
One of the main motivations for this indirect approach is the recognition that realising some of the most important goods in life, like steadfast friendship, romantic love, spontaneity, and highly-skilled flow states, requires (among other things) an integrated motivational, cognitive, and affective engagement with various things just for their own sakes, free from consequentialist calculation. (In our jargon above, this type of holistic “for-its-own-sake” engagement is (partially) constitutive of one important type of normative ideal.) The hope of indirect Leveled Consequentialism is to (i) orient our lives around promoting the good by selecting and cultivating our higher-level states in accordance with consequentialist reasoning while (ii) retaining the ability to realise key life goods such as friendship, love, spontaneity, and flow by “screening off” (a good deal of) our everyday thoughts, emotions, motives, and actions from consequentialist calculation.
The Leveled Consequentialist approach sounds promising. One worry we might have about it, however, is that the contingency of our higher-level states on their being value-maximising is itself alienating (at least in some cases). Imagine being friends with someone just because being their friend is value-maximising. Something is intuitively wrong with this. Specifically, even if the relegation of consequentialism to a higher level in your psychology successfully “screens off” most of your first-order thoughts, emotions, motives, and actions regarding this friend, such that you usually interact with them just like a regular friend would, you are prepared to end the friendship upon judging that doing so would maximise value. Or to flip the situation around, imagine how you might feel if one of your friends, or worse, your significant other admitted to you, “I’m only your friend/SO because it maximises impartial value.” Upon hearing this, you might rightfully begin to question your relationship with this person.
Of course, it is open to the Leveled Consequentialist to simply add another level in cases like this. They might reasonably say, “no, you have it wrong; it’s not individual relationships that must stand the test of consequentialist assessment, but the higher-level structures that coordinate and support individual relationships (e.g., a generally affable disposition to form close interpersonal connections, or an internalised policy to maintain a robust friendship circle for its own sake). If it’s value-maximising to have such a disposition or policy, you’re good to go; you never need to submit your individual relationships to consequentialist assessment!”
This proposal seems like it might indeed avoid alienation. Note, though, that it avoids alienation only by relegating consequentialism to quite a lofty and hands-off level. On this proposal, a vast swath of our cognitive, affective, intentional, motivational, and practical life will be governed by our commitments (e.g. to being the world’s best dad), identities or roles (e.g. a role as a teacher and the core part this role plays in our identity), internalised decision procedures (e.g. ‘be honest unless doing so would cause someone serious undue harm’), character traits (e.g. being a warm, loyal friend), etc. And remember that, according to indirect Leveled Consequentialism, thought patterns, motives, intentions, affects, and actions that “come from” our higher-level states are justified by their relationship to these higher-level states. They do not need to meet the test of consequentialist assessment. The result is that a great deal of our ethical life—our way of being in the world, of relating to ourselves and others—comes to be governed and justified in a non-consequentialist manner.
One can insist that the ethical theory we’ve arrived at is nonetheless consequentialist because our higher-level (or highest-level) states are cultivated and justified on consequentialist grounds. But whether we want to call the resulting theory consequentialist, non-consequentialist, half-consequentialist, or something else seems to me like a less interesting, merely linguistic dispute rather than a substantive philosophical disagreement. The substantive conclusion we’ve reached is that to avoid alienation and secure a range of life’s most important goods, consequentialism must ascend to a higher, more abstract, and less hands-on level of governance in our ethical lives, ceding much practical authority to a variety of non-consequentialist principles and modes of engagement with the world, other beings, and ourselves.
This completes the paper summary. What follows are my own thoughts.
IV. Conclusion
What follows if some normative ideals are ethically authoritative and/or if some of life’s highest goods require non-consequentialist modes of engagement? Before offering some incomplete thoughts on this question, I want to be clear that I fully support the things we in the EA community canonically advocate, such as going reducetarian/vegetarian/vegan, donating more and to the most effective organisations, choosing careers based on careful impact assessment, and promoting cause neutrality. Although I find the considerations around alienation and normative ideals compelling, I also think we have strong ethical reasons to be altruistic—indeed, to be more altruistic than commonsense morality suggests—and that when we set out to be altruistic, we should do so as effectively as possible.
With this caveat in place, I’ll turn to offering some closing thoughts on the upshots of the alienation objection. First, at the theoretical level: I think the objection should decrease our credence in consequentialism. Your posterior will, of course, be a function of (at least) your prior in consequentialism, how compelling you find the objection, and how much epistemic weight you give to interpersonal disagreement. But since one’s credences in different ethical theories constitute an important input into the process of handling moral uncertainty—and hence, I think, into leading an ethical life, which must account for moral uncertainty—the upshot that alienation concerns should lower our credence in consequentialism is important.
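Schematically (this formalism is my own gloss, not anything in the chapter), you can picture the update as ordinary Bayesian conditioning, writing C for consequentialism and E for the alienation considerations:

```latex
P(C \mid E) = \frac{P(E \mid C)\, P(C)}{P(E \mid C)\, P(C) + P(E \mid \neg C)\, P(\neg C)}
```

The more strongly the alienation considerations tell against consequentialism—that is, the smaller P(E | C) is relative to P(E | ¬C)—the larger the downward update, with your prior P(C) and any discount for peer disagreement setting the scale.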
Second, at the practical level: throughout this post I have stressed that key normative ideals, like that of deep friendship, are constituted by a holistically integrated, multifocal type of extended engagement with sources of value. Deep friendship is constituted by a complex of thought patterns, emotions, intentions, commitments, activities, and internal norms that cohere together in the right way and arise or are done simply out of friendship: for nothing other than the sake of the other person and your relationship with them. (Reflecting on goods like friendship and love highlights the fact that ethics touches our entire lives, not just our actions). And yet, as I hope to have shown, internalising a consequentialist ethic can inhibit our full participation in these key normative ideals and thereby vitiate some of life’s highest goods. And even if we are able to screen off our first-order responses by pushing consequentialism into the background or into a higher-level, organising role in our lives, consequentialism often gives us the wrong reasons for having certain motives, taking certain actions, etc., and thereby fails as an ethical theory just as much as it would in giving the wrong prescriptions.
It’s ok to hang out with your friends, kiss your significant other, and pursue a hobby that brings you joy; to be motivated to do these things, intend to do them, and feel good about doing them; and for none of these things to be dependent on, motivated by, or justified by the fact that doing or having them maximises value, or the fact that they arise from a higher-level psychological state that maximises value, or any other consequentialist reason. We can hold—as I wholeheartedly do—that we should devote a substantive portion of our resources and life’s work to altruism, and that we should put exquisite care into doing so effectively, without subjecting the totality of our existence to consequentialist assessment or accepting that consequentialism, by itself, can account for the totality of our ethical lives.
Notes
[1] In the 2019 EA Survey, 80.7% of EAs identified with consequentialism.
[2] In the full paper we discuss two other classes of consequentialist-inspired ethical theory—Hybrid Theories and Relative Value Theories—that I’ve omitted from this post for the sake of brevity.
Comments

Can't this example be generalized and used against any ethical theory that values more than just that one friend and their wellbeing, which is basically every plausible ethical theory? You have to weigh reasons against one another, so all theories could be framed to respond like "I’m only here because I had the most all-things-considered reason to be here. If I had more all-things-considered reason to paint a landscape, I would have stayed home and painted instead."
Impartial consequentialist theories weigh reasons in particular ways, and, as you point out, don't recognize certain ideals like friendship terminally that we perhaps should, which is what alienation is about (although your friend's welfare is a consideration!).
I guess this is more of a response to that particular example and its framing, not to say that impartial consequentialism isn't more alienating than other theories.
Are there better reasons to value a relationship than because it allows you or the other to do more good or be a better person? This seems like it could be the best reason to value a relationship, because it's unselfish. And it doesn't seem that alienating to me.
I suppose the point is that there shouldn't be a reason, and we should just value the relationship in itself. But then we're left with taking that as an axiom without justification (or else that justification would be a reason). And are we sure we aren't just being selfish or trying to justify selfishness by giving relationships special status to avoid much more demanding moral obligations?
Glad the alienation objection is getting some airtime in EA. I wanted to add two very brief notes in defense of consequentialism:
1) The alienation objection seems generalizable beyond consequentialism to any moral theory which (as you put it) inhibits you from participating in a normative ideal. I am not too familiar with other moral traditions, but I can see how following certain deontological or contractualist theories too far could also result in a kind of alienation. (Virtue ethics may be the safest here!)
2) The normative ideals that deal with interpersonal relationships are, as you mentioned, not the only normative ideals on offer. And while the ones that deal with interpersonal relationships may deserve a special weight, it’s still not clear how to weigh them relative to other normative ideals. Some of these other normative ideals may actually be bolstered by updating more in favor of following some kind of consequentialism. For example, consider the below quote from Alienation, Consequentialism, and the Demands of Morality by Peter Railton, which deeply resonated with me when I first read it:
Thanks for this!
It feels to me that the objection has some force, but might actually mean little in terms of what we should do. This is not a criticism of the objection, just a remark I want to expand on.
For example, if you're going to visit a friend to comfort them, and you come across a child drowning in a pond, and saving the child would prevent you from comforting your friend, you should still save the child, at least in my view. Our positive moral obligations to others, strangers or not, humans or not, seem so great in the real world today that they should usually beat friendship considerations, so that "my friend is suffering; I want to help them feel better!" is actually just not a good enough reason on its own in the real world because the stakes are so high. We are in triage every second of every day.
A few other thoughts:
My own view is that the virtue- and obligation-based reasons for entering or ending a relationship are usually far better than the selfish ones (except maybe if that person seriously mistreats you). Couples often remark on how they are better people with each other. "Better person" still makes sense on a virtue-consequentialist account, although it faces the objections you bring up in this post. Furthermore, in ending a relationship, you may be failing to meet some important obligations to that person that you didn't have before you started the relationship, and maybe these should hold you back more than impartial consequentialism does.
This doesn't resonate with me at all, personally. What exactly could be a purer, warmer motivation for helping a friend than the belief that helping them is the best thing you could be doing with your time? That belief implies their well-being is very important; it's not just an abstract consequence, their suffering really exists and by helping them you are choosing to relieve it.
That they're more important to you than impartial concern allows?
Why does Act Consequentialism imply impartiality?
The definition used here ("according to which (very roughly) you should do whatever has the best consequences, i.e., whatever produces the most value in the world") punts all the complexity into the definition of "value in the world", but that is entirely subjective and can be completely partial, as it is for many if not most people.
It seems this entire discussion suffers from a confusion of Act Consequentialism with something more specific and impartial, like a version of Utilitarianism, or at the very least from an underdefined use of terms like "value in the world".
I think we're taking impartiality for granted here. Consequentialism doesn't imply impartiality.
Then that's begging the question. The Alienation Objection isn't to Act Consequentialism at all, but to taking impartiality for granted.
I'm still confused by this. The more impartial someone's standards, if anything, the more important you should feel if they still choose to prioritize you.
It's more circumstantial if they prioritize you based on impartial concern; it just happened to be the best thing they could do.
Also, for an impartial consequentialist, I think "the belief that helping them is the best thing you could be doing with your time" won't normally be based primarily on their welfare, because that's pretty small compared to the impartial stakes we face. So, most of the reason comes from instrumental reasons, e.g. helping your friend because it does more good for others besides your friend, or because the seemingly better alternatives aren't actually sustainable in the long term, or you're actually doing something wrong by helping your friend instead of doing something else.
So, for an impartial consequentialist, you shouldn't normally help a friend primarily for their own sake. You can't say "I did this primarily out of my concern for you." without lying (actually the instrumental reasons are more important) or failing in your impartial obligations to others. Concern for them is part of it, but it isn't enough to beat your other obligations.
Hm, to my ear, prioritizing a friend just because you happen to be biased towards them is more circumstantial. It's based on accidents of geography and life events that led you to be friends with that person to a greater degree than with other people you've never met.
I agree, though that's a separate argument. I was addressing the claim that conditional on a consequentialist choosing to help their friend, their reasons are alienating, which I don't find convincing. My point was precisely that because the standard is so high for a consequentialist, it's all the more flattering if your friend prioritizes you in light of that standard. It's quite difficult to reconcile with my revealed priorities as someone who definitely doesn't live up to my own consequentialism, yes, but I bite the bullet that this is really just a failure on my part (or, as you mention, the "instrumental" reasons to be a good friend also win over anyway).
That's a good point. I think one plausible-sounding response is that while the friendship itself was started largely circumstantially, the reason you maintain and continue to value the relationship is not so circumstantial, and has more to do with your actual relationship with that other person.
If you do think it is a failure on your part, then the belief that it's the best thing you could be doing isn't your actual reason, and isn't one of your actual reasons a special concern for your friend or your relationship with them? I suppose the point is that you don't recognize that reason as an ethical one; it's just something that happens to explain your behaviour in practice, not what you think is right.
Right, but even so it seems like a friend who cares for you because they believe caring for you is good, and better than the alternatives, is "warmer" than one who doesn't think this but merely follows some partiality (or again, bias) toward you.
I suppose it comes down to conflicting intuitions on something like "unconditional love." Several people, not just hardcore consequentialists, find that concept hollow and cheap, because loving someone unconditionally implies you don't really care who they are, in any sense other than the physical continuity of their identity. Conditional love identifies the aspects of the person actually worth loving, and that seems more genuine to me, though less comforting to someone who wants (selfishly) to be loved no matter what they do.
Yeah, exactly. It would be an extremely convenient coincidence if our feelings for partial friendship etc., which evolved in small communities where these feelings were largely sufficient for social cohesion, just happened to be the ethically best things for us to follow - when we now live in a world where it's feasible for someone to do a lot more good by being impartial.
Edit: seems based on one of your other comments that we actually agree more than I thought.
Seems like you're trying to get at what I've seen referred to as 'multifinal means'. The keyword might help you find related work.
This is sort of tangential, but related to the idea of making the distinction between inputs and outputs in running certain decision processes. I now view both consequentialist and deontological theories as examples of what I've been calling perverse monisms. A perverse monism arises when there is a strong desire to collapse all the complexity in a domain into a single term. This is usually achieved via aether variables: we rearrange the model until the complexity (or uncertainty) has been shoved into a corner, either implicitly or explicitly, which makes the rest of the model look very tidy indeed.
With consequentialism we say that one should allow the inputs to vary freely while holding the outputs fixed (our idea of what the outcome should be, or heuristics that evaluate outcomes etc.). We backprop the appropriate inputs from the outputs. Deontology says we can't control outputs, but we can control inputs, so we should allow outputs to vary freely while holding the inputs to some fixed ideal.
Both of these express a hope that one can avoid the nebulosity of having a full-blown confusion matrix over inputs and outputs, one that changes from problem to problem. That is to say, I have some control over which outputs to optimize for, some control over inputs, and false positives and false negatives in my beliefs about both. Actual problem solving of any complexity forward-chains from known information about inputs, back-chains from previous data about outputs, and then tries to find places where the two branching chains meet, as in the sketch below. In the process of investigating this, beliefs about the inputs or outputs may also update.
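As a toy sketch of that last point (entirely my own illustration; the graph, names, and dependency structure are all hypothetical), here is what forward chaining from inputs and back chaining from outputs until the frontiers meet can look like in code:

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Forward-chain from the inputs we control and back-chain from the
    output we want, stopping when the two frontiers meet."""
    if start == goal:
        return {start}
    # Reverse adjacency so we can step backwards from the goal.
    reverse = {}
    for node, succs in graph.items():
        for s in succs:
            reverse.setdefault(s, set()).add(node)
    forward, backward = {start}, {goal}
    f_queue, b_queue = deque([start]), deque([goal])
    while f_queue and b_queue:
        # One forward step from known inputs...
        node = f_queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt in backward:  # the chains meet
                return forward | backward
            if nxt not in forward:
                forward.add(nxt)
                f_queue.append(nxt)
        # ...and one backward step from the desired output.
        node = b_queue.popleft()
        for prev in reverse.get(node, ()):
            if prev in forward:  # the chains meet
                return forward | backward
            if prev not in backward:
                backward.add(prev)
                b_queue.append(prev)
    return None  # no plan connects the inputs to the output

# Hypothetical dependency graph: an action enables intermediate states
# that eventually produce the outcome we care about.
graph = {"act": {"state1"}, "state1": {"state2"}, "state2": {"outcome"}}
print(bidirectional_search(graph, "act", "outcome"))
```

New information discovered mid-search (a missing edge, say) would update the graph itself, which is the sense in which beliefs about inputs and outputs co-evolve with the plan.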
More generally, I've been getting a lot of mileage out of thinking of 'philosophical positions' as different sorts of error checks that we use on decision processes.
It's also fun to think about this in terms of the heuristic that How to Measure Anything recommends:
1. Define parameters explicitly (what outputs do we think we care about, what inputs do we think we control)
2. Establish the value of information (how much will it cost to test various assumptions)
3. Uncertainty analysis (narrowing confidence bounds)
4. Sensitivity analysis (how much does the final proxy vary as a function of changes in inputs)
It's a non-linear heuristic, so the information gathered in any one step can cause you to go back and adjust one of the others, which involves that sort of bouncing back and forth between forward chaining and back chaining.
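A minimal sketch of steps 1, 3, and 4 (my own toy example: the proxy function, input ranges, and numbers are all made up, and step 2 is only gestured at):

```python
import random

# Step 1: define parameters. Two uncertain inputs we (partly) control,
# and a made-up proxy for the output we care about.
def proxy(a, b):
    return 3 * a + b ** 2

# Step 3: uncertainty analysis. Monte Carlo over the inputs' ranges
# gives confidence bounds on the proxy.
random.seed(0)
runs = sorted(proxy(random.uniform(0, 1), random.uniform(0, 2))
              for _ in range(10_000))
print("90% interval:", runs[500], runs[9500])

# Step 4: sensitivity analysis. Swing each input across its range while
# holding the other at its midpoint; the input with the bigger swing is
# where measurement (step 2, value of information) would pay off most.
print("swing from a:", proxy(1, 1) - proxy(0, 1))      # 3.0
print("swing from b:", proxy(0.5, 2) - proxy(0.5, 0))  # 4.0
```

The non-linearity shows up here too: if the sensitivity results surprised us, we'd loop back and redefine the proxy or the input ranges in step 1.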