MichaelStJules
Ya, I agree that many or even most people would get rid of some of their preferences, if they could, to be happier or more satisfied or whatever. Many people also have fears, anxieties or insecurities they'd rather not have, and those are kinds of "preferences" or "attitudes", the way I'm using those terms.

Life satisfaction is typically considered a kind of (or measure of) subjective well-being, and the argument would be the same for that as a special case: just make the number go up enough after taking the pill, while replacing what they care about. (And I'm using "subjective well-being" even more broadly than I think it's normally used.)

For example, I wonder if people who have preferences that are hard to satisfy might actually want to take such a life-satisfaction pill, if it meant their new preferences were easier to satisfy.

In my view, it only makes sense to do so if they already have, or were otherwise going to have, preferences/attitudes that would be more satisfied by taking the pill. If they would suffer less by taking the pill, then it could make sense. If they prefer greater life satisfaction per se, then it can make sense to take the pill.

I agree that some instances of replacement seem good, but I suspect the ones I'd agree with are only good in (asymmetric) preference-affecting ways. On the specific cases you mention:

  • Generational turnover
    • I'd be inclined against it unless
      • it's actually on the whole preferred (e.g. aggregating attitudes) by the people being replaced, or
      • the future generations would have lesser regrets or negative attitudes towards aspects of their own lives or suffering (per year, say). Pummer (2024) resolves some non-identity cases this way, while avoiding antinatalism (although I am fairly sympathetic to antinatalism).
  • not blindly marrying the first person you fall in love with
    • people typically (almost always?) care or will care about their own well-being per se in some way, and blindly marrying the first person you fall in love with is risky for that
    • more generally, a bad marriage can be counterproductive for most of what you care or will care to achieve
    • future negative attitudes (e.g. suffering) from the marriage or for things to be different can count against it
  • helping children to develop new interests:
    • they do or will care about their well-being per se, and developing interests benefits that
    • developing interests can have instrumental value for other attitudes they hold or are likely to eventually hold either way, e.g. having common interests with others, making friends, not being bored
    • developing new interests is often (usually? almost always?) a case of discovering dispositional attitudes they already have or would have had anyway. For example, there's already a fact of the matter, based in a child's brain as it already is or will be either way, about whether they would enjoy certain aspects of some activity.[1] So, we can just count unknown dispositional attitudes on preference-affecting views. I'm sympathetic to counting dispositional attitudes anyway for various reasons, and whether or not they're known doesn't seem very morally significant in itself.
  1. ^

    Plus, the things that get reinforced, and so may shift some of their attitudes, typically get reinforced because of such dispositional attitudes: we come to desire the things we're already disposed to enjoy, with the experienced pleasure reinforcing our desires.

Good point about the degree of identity loss.

I think the hybrid view you discuss is in fact compatible with some versions of actualism (e.g. weak actualism), which are entirely preference-affecting views (although maybe not exactly in the informal way I describe them in this post), so it's not necessarily hybrid in the way I meant it here.

Take the two outcomes of your example, assuming everyone would be well-off as long as they live, and Bob would rather continue to live than be replaced:

  1. Bob continues to live.
  2. Bob dies and Sally is born.

From the aggregated preferences or attitudes of the people in outcome 1, outcome 1 is best. From the aggregated preferences or attitudes of the people in outcome 2, outcome 2 is best. So each outcome is best according to the (would-be) actual people in it. So, not all preference-affecting views even count against this kind of replaceability.

My next two pieces will mostly deal with actualist(-ish) views, because I think they're best at taking on the attitudes that matter and treating them the right way, or being radically empathetic.

If it's better for the extended EA community and our efforts to do good, it's plausibly better for the world, which I assume such a person would care about. That’s what would be in it for them.

Maybe they don't think the balance of benefits and risks/downsides and costs (including opportunity costs) is favourable, though.

Would you consider making retroactive grants? I saw that the LTFF did a few. If you did, how would you evaluate them differently from the usual grants for future work?

I'm personally interested in retroactive grants for cause prioritization research.

    I suppose I'm more inclined to accept that decisions about which metaprinciples to apply will be context-sensitive, vague, and unlikely to be capturable by any simple, idealized decision theory. A non-ideal agent deciding when to round down has to juggle lots of different factors: their epistemic limitations, asymmetries in evidence, costs of being right or wrong, past track records, etc. I doubt that there's any decision theory that is both stateable and clear on this point.

Couldn't the decision theory just do exactly the same, and follow the same procedures? It could also just be context-sensitive, vague and complex.

How do we draw the line between which parts are epistemic vs decision-theoretic here? Maybe it's kind of arbitrary? Maybe they can't be cleanly separated?

I'm inclined to say that when we're considering the stakes to decide what credences to use, then that's decision-theoretic, not epistemic, because it seems like motivated reasoning if epistemic. It just seems very wrong to me to say that an outcome is more likely just because it would be worse (or more important) if it happened. If instead under the epistemic approach, we're not saying it's actually more likely, it's just something we shouldn't round down in practical decision-making if morally significant enough, then why is this epistemic rather than decision-theoretic? This seems like a matter of deciding what to do with our credences, a decision procedure, and typically the domain of decision theory.

Maybe it's harder to defend something on decision-theoretic grounds if it leads to Dutch books or money pumps? The procedure would lead to the same results regardless of which parts we call epistemic or decision-theoretic, but we could avoid blaming the decision theory for the apparent failures of instrumental rationality. But I'm also not sold on actually acknowledging such money pump and Dutch book arguments as proof of failure of instrumental rationality at all. 

(Edited.)

    The problem of arbitrariness has been pushed back from having no external standard for our rounding down value to having some arbitrariness about when that external standard applies. Some progress has been made.

It seems like we just moved the same problem to somewhere else? Let S be "that external standard" to which you refer. What external standard do we use to decide when S applies? It's hard to know if this is progress until/unless we can actually define and justify that additional external standard. Maybe we're heading off into a dead end, or it's just external standards all the way down.

Ultimately, if there's a precise number, like the threshold here, that looks arbitrary, then eventually we're going to have to rely on some precise and, I'd guess, arbitrary-seeming direct intuition about some number.

    Second, the epistemic defense does not hold that the normative laws change at some arbitrary threshold, at least when it comes to first-order principles of rational decision.

Doesn't it still mean the normative laws — as epistemology is also normative — change at some arbitrary threshold? Seems like basically the same problem to me, and equally objectionable.


Likewise, at first glance (and I'm an expert in neither decision theory nor epistemology), your other responses to the objections in your epistemic defense seem usable for decision-theoretic rounding down. One of your defenses of epistemic rounding down is stakes-sensitive, but then it doesn't seem so different from risk aversion, ambiguity aversion and their difference-making versions, which are decision-theoretic stances.

In particular

    Suppose we adopt Moss’s account on which we are permitted to identify with any of the credences in our interval and that our reasons for picking a particular credence will be extra-evidential (pragmatic, ethical, etc.). In this case, we have strong reasons for accepting a higher credence for the purposes of action.

sounds like an explicit endorsement of motivated reasoning to me. What we believe about what will happen (i.e. the credences we pick) shouldn't depend on ethical considerations, i.e. our (ethical) preferences. If we're talking about picking credences from a set of imprecise credences to use in practice, then this seems to fall well under decision-theoretic procedures, like ambiguity aversion. So, such a procedure seems better justified to me as decision-theoretic.

Similarly, I don't see why this wouldn't be at least as plausible for decision theory:

    Suppose you assign a probability of 0 to state s1 for a particular decision. Later, you are faced with a decision with a state s2 that your evidence says has a lower probability than s1 (even though we don’t know what their precise values are). In this context, you might want to un-zero s1 so as to compare the two states.

One response to these objections to rounding down is that similar objections could be raised against treating consciousness, pleasure, unpleasantness and desires sharply if it turns out to be vague whether some systems are capable of them. We wouldn't stop caring about consciousness, pleasure, unpleasantness or desires just because they turn out to be vague.

And one potential "fix" to avoid these objections is to just put a probability distribution over the threshold, and use something like a (non-fanatical) method for normative uncertainty like a moral parliament over the resulting views. Maybe the threshold is distributed uniformly over the interval .

Now, you might say that this is just a probability distribution over views to which the objections apply, so we can still just object to each view separately as before. However, someone could just consider the normative view that is (extensionally) equivalent to a moral parliament over the views across different thresholds. It's one view. If we take the interval to just be , then the view doesn't ignore important outcomes, it doesn't neglect decisions under any threshold, and the normative laws don't change sharply at some arbitrary point.
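As a toy numerical sketch of this "parliament over thresholds" idea (the interval [0, 0.1], the uniform grid and the simple gamble here are hypothetical placeholders I've chosen for illustration, not values from the discussion above): each threshold defines a rounding-down view, and the parliament just averages their evaluations, so the aggregate changes gradually with the probability rather than jumping at a single sharp point.

```python
def view_value(p, utility, t):
    """Evaluation of a simple gamble (win `utility` with probability p,
    else nothing) under a rounding-down view with threshold t:
    probabilities below t are treated as zero."""
    return 0.0 if p < t else p * utility

def parliament_value(p, utility, thresholds):
    """Aggregate view: average the evaluations across all the
    rounding-down views, i.e. a uniform 'moral parliament'."""
    return sum(view_value(p, utility, t) for t in thresholds) / len(thresholds)

# Hypothetical: thresholds spread uniformly over [0, 0.1].
thresholds = [0.1 * i / 1000 for i in range(1000)]

# Any single view jumps discontinuously at its own threshold,
# but the parliament's evaluation varies smoothly with p.
```

Under this sketch, nudging p from 0.05 to 0.0501 changes any individual view's evaluation by at most the full stake, but changes the parliament's evaluation only slightly, which is the point about the normative laws not changing sharply at some arbitrary threshold.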

The specific choice of distribution for the threshold may still seem arbitrary. But this seems like a much weaker objection, because it's much harder to avoid in general, e.g. precise cardinal tradeoffs between pleasures, between displeasures, between desires and between different kinds of interests could be similarly arbitrary.

This view may seem somewhat ad hoc. However, I do think treating vagueness/imprecision like normative uncertainty is independently plausible. At any rate, in case some of the things we care about turn out to be vague but we'll want to keep caring about them anyway, we'll want to have a way to deal with vagueness, and whatever that is could be applied here. Treating vagueness like normative uncertainty is just one possibility, which I happen to like.

DMRA (difference-making risk aversion) could actually favour helping animals of uncertain sentience over helping humans or animals of more probable sentience, if and because helping humans can backfire badly for other animals in case other animals matter a lot (through the meat-eater problem and effects on wild animals), and helping vertebrates can also backfire badly for wild invertebrates in case wild invertebrates matter a lot (especially through population effects from land use and fishing). Helping other animals seems less prone to backfiring so badly for humans, although it can. And helping farmed shrimp and insects seems less prone to backfiring so badly (relative to potential benefits) for other animals (vertebrates, invertebrates, farmed and wild).

I suppose you might prefer human-helping interventions with very little impact on animals. Maybe mental health? Or, you might combine human-helping interventions to try to mostly cancel out impacts on animals, like life-saving charities + family planning charities, which may have roughly opposite-sign effects on animals. And maybe also hedge with some animal-helping interventions to make up for any remaining downside risk for animals. Their combination could be better under DMRA than primarily animal-targeted interventions, or at least than interventions aimed at helping animals unlikely to matter much.
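A hypothetical toy model of why hedging can win under this kind of view (the numbers and the specific risk-weighting are made up for illustration, not RP's actual DMRA formalization): weight negative differences made (backfires) more heavily than positive ones, then compare an unhedged intervention against a portfolio that gives up some expected upside to offset most of the downside.

```python
def dmra_value(outcomes, risk_aversion=2.0):
    """Toy difference-making risk-averse evaluation. `outcomes` is a list
    of (probability, difference_made) pairs; negative differences
    (backfires) are weighted `risk_aversion` times more heavily."""
    return sum(p * (risk_aversion if d < 0 else 1.0) * d for p, d in outcomes)

# Hypothetical numbers: the unhedged intervention has the higher expected
# difference but a large possible backfire for animals; the hedged
# portfolio sacrifices upside to offset most of that downside.
unhedged = [(0.5, 12.0), (0.5, -8.0)]  # expected difference: 2.0
hedged = [(0.5, 4.0), (0.5, -1.0)]     # expected difference: 1.5
```

With `risk_aversion=1.0` this reduces to plain expected difference made and favours the unhedged option; with `risk_aversion=2.0` the ranking flips in favour of the hedged portfolio, illustrating how DMRA can prefer combinations that mostly cancel out downside risk.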

Maybe chicken welfare reforms still look good enough on their own, though, if chickens are likely enough to matter enough, as I think RP showed in the CURVE sequence.
