I want to find a good thought experiment that makes us appreciate how radically uncertain we should be about the very long-term effects of some actions altruistically motivated actors might take. Some have already been proposed in the 'cluelessness' literature -- a nice overview of which is given by Tarsney et al. (2024, §3) -- but I don't find them ideal, as I'll briefly suggest. So let me propose a new one. Call it the 'Dog vs Cat' dilemma:

Say you are a philanthropy advisor reputed for being unusually good at forecasting the various direct and indirect effects donations to different causes can have. You are approached by a billionaire with a deep love for companion animals who wants to donate almost all his wealth to animal shelters. He asks you whether he should donate to dog shelters around the World or to cat shelters instead.[1] Despite the relatively narrow set of options he is considering, he specifies, importantly, that he does not only care about the short-term effects his donation would have on cats and dogs around the World. He carefully explains and hugely emphasizes that he wants his choice to be the one that is best, all things considered (i.e., not bracketing out effects on beings other than companion animals or effects on the long-term future).[2] You think about his request and, despite your great forecasting abilities, quickly come to appreciate how impossible the task is. The number and the complexity of causal ramifications and potentially decisive flow-through effects to consider are overwhelming. It is highly implausible that a donation of that size does not somehow change important aspects of the course of History in some non-negligible ways. Even if only very indirectly, it will inevitably affect many people’s attitudes towards dogs and cats, the way these people live, their values, their consumption, economic growth, technological development, human and animal population sizes, the likelihood of a third World War and the exact actors that would be involved, etc. Some aspects of these effects are predictable. Many others are way too chaotic. And you cannot reasonably believe these chaotic changes will be even roughly the same no matter whether the beneficiaries of the donation are dog or cat shelters. If the billionaire picks cats over dogs, this will definitely end up making the World counterfactually better or worse, all things considered, to a significant extent. The problem is you have no idea which it is. In fact, you have no idea even whether donating his money to either will turn out overall better than not donating it to begin with.


I have two questions for you.

1. Can you think of any reasonable objection to the strongly implied takeaway that the philanthropy advisor, here, should be agnostic about the sign of the overall consequences of the donation?

2. Is that a good illustration of the motivations for cluelessness? I like it more than, e.g., Greaves' (2016) grandma-crossing-the-street example and Mogensen's (2021) 'AMF vs Make-A-Wish Foundation' one, because there is no pre-established intuition that one option is "obviously" better than the other (so we avoid biases). Also, it is clear in the above thought experiment that our choice matters a bunch despite our cluelessness. It's obvious that the "the future remains unchanged" (/ "ripple in the pond") objection doesn't work (see, e.g., Lenman 2000; Greaves 2016). I also find this story easy to remember. What do you think?

I also hope some others will find this thought experiment interesting, and that posting it here may be useful beyond just me getting helpful feedback on it.

  1. ^

    For simplicity, let’s assume it can only be 100% one or the other. He cannot split between the two.

  2. ^

    You might wonder why the billionaire only considers donating to dog or cat shelters and not to other causes, given that he so crucially cares about the overall effects on the World from now till its end. Well, maybe he has special tax-deductibility benefits from donating to such shelters. Maybe his 12-year-old daughter will get mad at him if he gives to anything else. Maybe the money he wants to give is some sort of coupon that only dog and cat shelters can receive for some reason. Maybe you end up asking him why and he answers ‘none of your business!’. Anyway, this of course does not matter for the sake of the thought experiment.

Comments

For what it's worth, although I do think we are clueless about the long-run (and so overall) consequences of our actions, the example you've given isn't intuitively compelling to me. My intuition wants to say that it's quite possible that the cat vs dog decision ends up being irrelevant for the far future / ends up being washed out.

Sorry, I know that's probably not what you want to hear! Maybe different people have different intuitions.

That's very useful, thanks! I was hoping it would feel like there is no way the effects get washed out, given that such a large portion of the World's resources gets put into this, so it's really good to know you don't have that intuition when reading it (especially if you generally think we are clueless!).

Maybe I can give a better intuition pump for how the effects will last and ramify. But, also, maybe talking about cats and dogs makes the decision look too trivial to begin with, and other cause area examples would be better.

Thanks again! Glad you shared an intuition that goes against what I was hoping. That was the whole point of me posting this :)

I also didn't find it too compelling; I think part of the issue is that the choice doesn't seem important or high-stakes enough. Maybe the philanthropist should be deciding whether to fund clean energy R&D or vaccines R&D, or similar.

I don't think I quite agreed with this, or at least it felt misleading:

And you cannot reasonably believe these chaotic changes will be even roughly the same no matter whether the beneficiaries of the donation are dog or cat shelters.

I think it may be very reasonable to think that in expectation the long-term effects will be 'roughly the same'. This feels more like a simple cluelessness case than complex cluelessness (unless you explain why cats vs dogs will predictably change economic growth, world values, population size, etc.).

Whereas with vaccines vs clean energy, I think there would be more plausible reasons why one or the other will systematically have different consequences. (Maybe a TB vaccine will save more lives, increasing population and economic growth (including making climate change slightly worse), whereas the clean energy will increase growth slightly, make climate change slightly less bad, and therefore increase population a bit as well, but with a longer lag time.)

Also on your question 1, I think being agnostic about which one is better is quite different to being agnostic about whether something is good at all (in expectation) and I think the first is a significantly easier thing to argue for than the second.

Maybe the philanthropist should be deciding whether to fund clean energy R&D or vaccines R&D, or similar.

I like these examples, especially the fact that it's obvious they impact the long term. My main worry, however, would be that most longtermists will start out pretty convinced that we can figure out which one is best without too much trouble (actually, I think they'd already have an opinion), and that this is therefore not a good example of cluelessness, (even) more so than something like dogs vs cats.

But very good pointer. I'll try to think of something in the same vein as clean energy vs vaccines but where longtermists would start more agnostic. Maybe two things where the sign on X-risk reduction seems unusually uncertain...

Nice, thanks Oscar! I totally get how it might seem like a case of simple cluelessness. I don't think it actually is but it definitely isn't obvious, yeah. This is a problem. 

Also on your question 1, I think being agnostic about which one is better is quite different to being agnostic about whether something is good at all (in expectation) and I think the first is a significantly easier thing to argue for than the second.

I think I kinda agree, but in the same way that I agree that doing 1 trillion push-ups in a row is significantly harder than doing 1 million. It's technically true in some sense, but both are way out of reach anyway. I really don't see how one could make a convincing argument why donating to animal shelters predictably makes the World better or worse, considering all the effects from now until the end of time.

You're welcome! N=1 though, so might be worth seeing what other people think too.

I wonder if the example is weakened by the last sentence:

In fact, you have no idea even whether donating his money to either will turn out overall better than not donating it to begin with.

Right now I feel like this is a hard question. But it doesn't feel like an impossibly intractable one. I think if the forum spent a week debating this question you'd get some coherent positions staked out -- where after the debate it would still be unreasonable to be very confident in either answer, but it wouldn't seem crazy to think that the balance of probabilities suggested favouring one course of action over the other.

This makes me notice that the cats and dogs question feels different only in degree, not kind. I think if you had a bunch of good thinkers consider it in earnest for some months, they wouldn't come out indifferent. I'd hazard that it would probably be worth >$0.01 (in expectation, on longtermist welfarist grounds) to pay to switch which kind of shelter the billions went to. But I doubt it would be worth >$100. And at that point it wouldn't be worth the analysis to get to the answer.

But having written that, I notice that the example helped me to articulate my thoughts on cluelessness! Which makes it seem like actually a pretty helpful example. :)

(And maybe this is kind of the point -- that cluelessness isn't an absolute of "we cannot hope even in principle to say anything here", but rather a pragmatic barrier of "it's never gonna be worth taking the time to know".)

Interesting, thanks a lot! 

Fwiw, I wrote this, which sort of goes against your impression, in another comment thread here:

I really don't see how one could make a convincing argument why donating to animal shelters predictably makes the World better or worse, considering all the effects from now until the end of time.

The problem is we can't just update away from agnosticism based on arguments that don't address the very reasons for our agnosticism. In the DogvCat story, one key driver of my cluelessness is that I think there will always be crucial considerations we are unaware of, because we're missing them or couldn't even comprehend them (see Roussos 2021; Tarsney et al 2024, §3), and I can't conveniently assume good and bad unknown unknowns 'cancel out' (Lenman 2000; Greaves 2016; Tarsney et al 2024, §3). For me to quit agnosticism, we'd have to find an argument robust to these unknown unknowns (and I'd be surprised if we find one). Arguments that don't address unknown unknowns don't address my cluelessness at all and it seems like they shouldn't make me update. This is an instance of what Miriam Schoenfield (2012) calls 'insensitivity to mild sweetening'.

But it'd be hard for me to make a case more convincing than this without unpacking a lot more (which I'll do properly someday somewhere, hopefully). And your point that my thought experiment is weakened by its last sentence not seeming obviously right at all (at least if we assume we are given more resources to think hard about the question) is still well taken! That's a very fair and helpful observation :)

Just on this point:

I can't conveniently assume good and bad unknown unknowns 'cancel out'

FWIW, my take would be:

  • No, we shouldn't assume that they "cancel out"
  • However, as a structural fact[*] about the world, the prevalence of good and bad unknown unknowns is correlated with the good and bad knowns (and known unknowns)
  • So, on average and in expectation, things will point in the same direction as the analysis ignoring cluelessness (although it's worth being conscious that this will turn out wrong in a significant fraction of cases ― probably approaching 50% for something like cats vs dogs)

Of course this relies heavily on the "fact" I denoted as [*], but really I'm saying "I hypothesise this to be a fact". My reasons for believing it are something like:

  • Some handwavey argument along these lines:
    • Among the many complex things we could consider, they will vary in the proportion of considerations that point in a good direction
    • If our knowledge sampled randomly from the available considerations, we would expect this correlation
    • It's too much to expect our knowledge to sample randomly ― there will surely sometimes be structural biases ― but there's no reason to expect the deviations to be so perverse as to (on average) actively mislead
      • (this needn't preclude the existence of some domains with such a perverse pattern, but I'd want a positive argument that something might be such a domain)
    • Given that we shouldn't expect the good and bad unknown unknowns to cancel out, by default we should expect them to correlate with the knowns
  • A sense that empirically this kind of correlation is true in less clueless-like situations
    • e.g. if I uncover a new consideration about whether it's good or bad for EAs to steal-to-give, it's more likely to point to "bad" than "good"
    • Combined with something like a simplicity prior ― if this effect exists for things where we have a fairly strong sense of the considerations we can track, by default I'd expect it to exist in weaker form for things where we have a weaker sense of the considerations we can track (rather than being non-existent or occurring in a perverse form)

In principle, this could be tested experimentally. In practice, you're going to be chasing after tiny effect sizes with messy setups, so I don't think it's viable any time soon for human judgement. I do think you might hope to one day run experiments along these lines for AI systems. Of course they would have to be cases where we have some access to the ground truth, but the AI is pretty clueless -- perhaps something like getting non-superintelligent AI systems to predict outcomes in a complex simulated world.
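As a very rough illustration of the 'sampling from considerations' story above, here is a toy simulation sketch. All of the numbers in it (how many considerations there are, what fraction of them we know about, their Gaussian weights) are arbitrary assumptions, purely to show the structure of the claim, not a serious model:

```python
import random


def simulate(n_worlds=10_000, n_considerations=50, frac_known=0.2, seed=0):
    """Toy model: each 'world' has many considerations with random signed weights.

    We 'know' a random subset of them and ask how often the sign of the known
    subset agrees with the sign of the full total. All parameters are arbitrary.
    """
    rng = random.Random(seed)
    n_known = int(frac_known * n_considerations)
    agreements = 0
    for _ in range(n_worlds):
        weights = [rng.gauss(0, 1) for _ in range(n_considerations)]
        known = rng.sample(weights, n_known)
        if (sum(weights) > 0) == (sum(known) > 0):
            agreements += 1
    return agreements / n_worlds


if __name__ == "__main__":
    # With these arbitrary parameters, the known subset points the same way as
    # the full total in roughly 65% of worlds: correlated with the truth, but
    # still wrong in a large fraction of cases.
    print(simulate())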

Thanks a lot for developing on that! To confirm whether we've identified at least one of the cruxes, I'd be curious to know what you think of what follows.

Say I am clueless about the (dis)value of the alien counterfactual we should expect (i.e., whether another civ someday replacing our own (after we go extinct or something) would be better or worse than our own maintaining control over our corner of the Universe). One consideration I have identified is that there is, all else equal, a selection effect against caring about suffering for grabby civs. But all else is ofc not equal, and there might be plenty of considerations I haven't thought of and/or will never be aware of that support the opposite, or other relevant considerations that have nothing to do with care for suffering. I'm clueless. By 'I'm clueless', I don't mean 'I have a 50% credence the alien counterfactual is better'. Instead, I mean 'my credence is severely indeterminate/imprecise, such that I can't compute the expected value of reducing X-risks (unless I decide to give up on impartial consequentialism and ignore things like the alien counterfactual, which I'm clueless about)' (for a case for how cluelessness threatens expected value reasoning in such a way, see e.g. Mogensen 2021).

Your above argument is based on the assumption that our credences all ought to be determinate/precise and that cluelessness = 50% credence, right? It's probably not worth discussing further in here whether this assumption is justified but do you also think that's one of the cruxes, here?

I think this is at least in the vicinity of a crux?

My immediate thoughts (I'd welcome hearing about issues with these views!):

  • I don't think our credences all ought to be determinate/precise
  • But I've also never been satisfied with any account I've seen of indeterminate/imprecise credences
    • (though noting that there's a large literature there and I've only seen a tiny fraction of it)
  • My view would be something more like:
    • As boundedly rational actors, it makes sense for a lot of our probabilities to be imprecise
    • But this isn't a fundamental indeterminacy — rather, it's a view that it's often not worth expending the cognition to make them more precise
    • By thinking longer about things, we can get the probabilities to be more precise (in the limit converging on some precise probability)
    • At any moment, we have credence (itself kind of imprecise absent further thought) about where our probabilities will end up with further thought
    • What's the point of tracking all these imprecise credences rather than just single precise best-guesses?
      • It helps to keep tabs on where more thinking might be helpful, as well as where you might easily be wrong about something
  • On this perspective, cluelessness = inability to get the current best guess point estimate of where we'd end up to deviate from 50% by expending more thought

I've also never been satisfied with any account I've seen of indeterminate/imprecise credences

I'd be keen to hear more why you're unsatisfied with these accounts. 

But this isn't a fundamental indeterminacy — rather, it's a view that it's often not worth expending the cognition to make them more precise

Just to be clear, are you saying: "It's a view that, for all/most indeterminate credences we might have, our prioritization decisions (e.g. whether intervention X is net-good or net-bad) aren't sensitive to variation within the ranges specified by these credences"?

At any moment, we have credence (itself kind of imprecise absent further thought) about where our probabilities will end up with further thought

If your estimate of your ideal-precise-credence-in-the-limit is itself indeterminate, that seems like a big deal — you have no particular reason to adopt a determinate credence then, seems to me. (Maybe by "kind of" you mean to allow for a degree of imprecision that isn't decision-relevant, per my question above?)

What's the point of tracking all these imprecise credences rather than just single precise best-guesses?

Because if the sign of intervention X for the long-term varies across your range of credences, that means you don't have a reason to do X on total-EV grounds. This seems hugely decision-relevant to me, if we have other decision procedures under cluelessness available to us other than committing to a precise best guess, as I think we do (see this comment).
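To illustrate with made-up numbers (the payoffs and the credence interval below are purely hypothetical, just to show the structure): suppose X yields +10 if things go well and -15 if they go badly, and all your credal state says is that your credence p that things go well lies somewhere in [0.4, 0.7]. Then

$$\mathbb{E}[X] = 10p - 15(1-p) \in [\,0.4\cdot 10 - 0.6\cdot 15,\ 0.7\cdot 10 - 0.3\cdot 15\,] = [-5,\ +2.5],$$

so the sign of the expected value is simply not settled by your credal state.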

ETA: I'm also curious whether, if you agreed that we aren't rationally obligated to assign determinate credences in many cases, you'd agree that your arguments about unknown unknowns here wouldn't work. (Because there's no particular reason to commit to one "simplicity prior," say. And the net direction of our biases on our knowledge-sampling processes could be indeterminate.)

I'd be keen to hear more why you're unsatisfied with these accounts.

With the warning that this may be unsatisfying, since this is recounting a feeling I've had historically, and I'm responding to my impression about a range of accounts, rather than providing sharp complaints about a particular account:

  • Accounts of imprecise credences seem typically to produce something like ranges of probabilities and then treat these as primitives
  • I feel confusion about "where does the range come from? what's it supposed to represent?"
    • Honestly this echoes some of my unease about precise credences in the first place!
  • So I am into exploration of imprecise credences as a tool for modelling/describing the behaviour of boundedly rational actors (including in some contexts as a normative ideal for them to follow)
  • But I think I get off the train before reification of the imprecise credences as a thing unto themselves

(that's incomplete, but I think it's the first-order bit of what seems unsatisfying)

Just to be clear, are you saying: "It's a view that, for all/most indeterminate credences we might have, our prioritization decisions (e.g. whether intervention X is net-good or net-bad) aren't sensitive to variation within the ranges specified by these credences"?

Definitely not saying that!

Instead I'm saying that in many decision-situations people find themselves in, although they could (somewhat) narrow their credence range by investing more thought, in practice the returns from doing that thinking aren't enough to justify it, so they shouldn't do the thinking.

If your estimate of your ideal-precise-credence-in-the-limit is itself indeterminate, that seems like a big deal — you have no particular reason to adopt a determinate credence then, seems to me. 

I don't see probabilities as magic absolutes; I see them as a tool. Sometimes it seems helpful to pluck a number out of the air and roll with it (and for that to be better practice than investing cognition in keeping track of an uncertainty range).

That said, I'm not sure it's crucial to me to model there being a single precise credence that is being approximated. What feels more important is to be able to model the (common) phenomenon where you can reduce your uncertainty by investing more time thinking.

Later in your comment you use the phrase "rationally obligated". I find I tend to shy away from that phrase in this context, because of vagueness about whether it means for fully rational or boundedly rational actors. In short:

  • I'm sympathetic to the idea that fully rational actors should have precise credences
    • (for the normal vNM kind of reasons)
    • I don't want to fully commit to that view, but it also doesn't seem to me to be cruxy
  • I don't think that boundedly rational actors are rationally obliged to have precise credences
  • But I don't think that entails giving up on the idea of them making progress towards something (that I might think of as "the precise credence a fully rational version of them would have") by thinking more, by saying "you have no reason to adopt a precise credence"

Because if the sign of intervention X for the long-term varies across your range of credences, that means you don't have a reason to do X on total-EV grounds.

I reject this claim. For a toy example, suppose that I could take action X, which will lose me $1 if the 20th digit of Pi is odd, and gain me $2 if the 20th digit of Pi is even. Without doing any calculations or looking it up, my range of credences is [0,1] -- if I think about it long enough (at least with computational aids), I'll resolve it to 0 or 1. But right now I can still make guesses about my expectation of where I'd end up (somewhere close to 50%), and think that this is a good bet to take -- rather than saying that EV somehow doesn't give me any reason to like the bet.
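Spelling the toy bet out, with that rough 50/50 guess treated as a working credence:

$$\mathbb{E}[\text{payoff}] \approx 0.5\times(-\$1) + 0.5\times(+\$2) = +\$0.50 > 0,$$

so the bet looks worth taking, even though my credence range before doing any thinking is the whole interval [0, 1].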

This seems hugely decision-relevant to me, if we have other decision procedures under cluelessness available to us other than committing to a precise best guess, as I think we do

For what it's worth I'm often pretty sympathetic to other decision procedures than committing to a precise best guess (cluelessness or not).

ETA: I'm also curious whether, if you agreed that we aren't rationally obligated to assign determinate credences in many cases, you'd agree that your arguments about unknown unknowns here wouldn't work. (Because there's no particular reason to commit to one "simplicity prior," say. And the net direction of our biases on our knowledge-sampling processes could be indeterminate.)

I don't think I'd agree with that. Although I could see saying "yes, this is a valid argument about unknown unknowns; however, it might be overwhelmed by as-yet-undiscovered arguments about unknown unknowns that point in the other direction, so we should be suspicious of resting too much on it".

Oh my bad. I don't think it's really a crux, then. Or not the most key one at least. I guess I can't narrow it down to anything more precise than whether your "fact[*]" is true, in that case. And it looks like I misunderstood the assumptions behind your justification of it.

I'll brush up on my limited knowledge of the literature on unawareness -- maybe dive deeper -- and see to what extent your "fact[*]" was already discussed. I'm sure it was. Then, I'll go back to your justification of it to see if I understand it better and whether I actually can say I disagree.

Thanks for all your thoughts!

Surely we should have nonzero credence, and maybe even >10%, that there aren't any crucial considerations we are missing that are on the scale of 'consider nonhumans' or 'consider future generations'. In which case we can bracket worlds where there is a crucial consideration we are missing as too hard, and base our decision on the worlds where we have the most crucial considerations already, and base our analysis on that. Which could still move us slightly away from pure agnosticism?

Your view seems to imply the futility of altruistic endeavour? Which of course doesn't mean it is incorrect, just seems like an important implication.

In which case we can bracket worlds where there is a crucial consideration we are missing as too hard, and base our decision on the worlds where we have the most crucial considerations already, and base our analysis on that.

Ah nice, so this could mean two different things:

A. (The ‘canceling out’ objection to (complex) cluelessness:) We assume that good and bad unpredictable effects “cancel each other out” such that we are warranted to believe whatever option is best according to predictable effects is also best according to overall effects, OR

B. (Giving up on impartial consequentialism:) We reconsider what matters for our decision and simply decide to stop caring about whether our action makes the World better or worse, all things considered. Instead, we focus only on whether the parts of the World that are predictably affected a certain way are made better or worse and/or about things that have nothing to do with consequences (e.g., our intentions), and ignore the actual overall long-term impact of our decision which we cannot figure out.

I think A is a big epistemic mistake for the reasons given by, e.g., Lenman 2000; Greaves 2016; Tarsney et al 2024, §3.

Some version of B might be the right response in the scenario where we don't know what else to do anyway? I don't know. One version of B is explicitly given by Lenman who says we should reject consequentialism. Another is implicitly given by Tarsney (2022) when he says we should focus on the next thousands of years and sort of admit we have no idea what our impact is beyond that. But then we're basically saying that we "got beaten" by cluelessness and are giving up on actually trying to improve the long-term future, overall (which is what most longtermists are claiming our goal should be, for compelling ethical reasons). We can very well endorse B, but then we can't pretend we're trying to actually predictably improve the World. We're not. We're just trying to improve some aspects of the World, ignoring how this affects things overall (since we have no idea).

Your view seems to imply the futility of altruistic endeavour?

If you replace "altruistic endeavour" by "impartial consequentialism", in the DogvCat case, yes, absolutely. But I didn't mean to imply that cluelessness in that case generalizes to everything (although I'm also not arguing it doesn't). There might be cases where we have arguments plausibly robust to many unknown unknowns that warrant updating away from agnosticism, e.g., arguments based on logical inevitabilities or unavoidable selection effects. In this thread, I've only argued that I'd be surprised if we find such a (convincing) argument for the DogvCat case, specifically. But it may very well be that this generalizes to many other cases and that we should be agnostic about many other things, to the extent that we actually care about our overall impact.

And I absolutely agree that this is an important implication of my points here. I think the reason why these problems are neglected by sympathizers of longtermism is that they (unwarrantedly) endorse A or (also unwarrantedly) assume that the fact that 'wild guesses' are often better than agnosticism in short-term geopolitical forecasting means they're also better when it comes to predicting our overall impact on the long-term future (see 'Winning isn't enough').

I think I am quite sympathetic to A, and to the things Owen wrote in the other branch, especially about operationalizing imprecise credences. But this is sufficiently interesting and important-seeming that I am making a note to read later some of the references you give for A being false.

Oh interesting, I would have guessed you'd endorse some version of B or come up with a C, instead. 

Iirc, these resources I referenced don't directly address Owen's points to justify A, though. Not sure. I'll look into this and where they might be more straightforwardly addressed, since this seems quite important w.r.t. the work I'm currently doing. Happy to keep you updated if you want.

yeah sure, lmk what you find out!

Executive summary: The "Dog vs Cat" thought experiment illustrates how radically uncertain we should be about the very long-term effects of altruistic actions, even for narrow decisions like donating to dog vs cat shelters.

Key points:

  1. A billionaire wants to donate his wealth to either dog or cat shelters worldwide, caring about all long-term consequences, not just direct effects on companion animals.
  2. Even for this narrow decision, the number and complexity of causal ramifications and flow-through effects are overwhelming and impossible to predict.
  3. The donation will inevitably affect attitudes, values, consumption, economic growth, technological development, populations, geopolitical events, etc. in chaotic and unpredictable ways.
  4. The philanthropy advisor should arguably be agnostic about whether the overall consequences will be positive or negative.
  5. This thought experiment may be a compelling illustration of the motivations for "cluelessness" about long-term effects, avoiding some shortcomings of previous examples.
  6. The story makes it clear that the choice matters significantly despite our cluelessness, and the "future remains unchanged" objection does not apply.

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
