Owen Cotton-Barratt

I'd be keen to hear more why you're unsatisfied with these accounts.

With the warning that this may be unsatisfying, since this is recounting a feeling I've had historically, and I'm responding to my impression about a range of accounts, rather than providing sharp complaints about a particular account:

  • Accounts of imprecise credences seem typically to produce something like ranges of probabilities and then treat these as primitives
  • I feel confusion about "where does the range come from? what's it supposed to represent?"
    • Honestly this echoes some of my unease about precise credences in the first place!
  • So I am into exploration of imprecise credences as a tool for modelling/describing the behaviour of boundedly rational actors (including in some contexts as a normative ideal for them to follow)
  • But I think I get off the train before reification of the imprecise credences as a thing unto themselves

(that's incomplete, but I think it's the first-order bit of what seems unsatisfying)

Just to be clear, are you saying: "It's a view that, for all/most indeterminate credences we might have, our prioritization decisions (e.g. whether intervention X is net-good or net-bad) aren't sensitive to variation within the ranges specified by these credences"?

Definitely not saying that!

Instead I'm saying that in many decision-situations people find themselves in, although they could (somewhat) narrow their credence range by investing more thought, in practice the returns from doing that thinking aren't enough to justify it, so they shouldn't do the thinking.

If your estimate of your ideal-precise-credence-in-the-limit is itself indeterminate, that seems like a big deal — you have no particular reason to adopt a determinate credence then, seems to me. 

I don't see probabilities as magic absolutes; I see them as a tool. Sometimes it seems helpful to pluck a number out of the air and roll with it (and for that to be better practice than investing cognition in keeping track of an uncertainty range).

That said, I'm not sure it's crucial to me to model there being a single precise credence that is being approximated. What feels more important is to be able to model the (common) phenomenon where you can reduce your uncertainty by investing more time thinking.

Later in your comment you use the phrase "rationally obligated". I find I tend to shy away from that phrase in this context, because of vagueness about whether it refers to fully rational or boundedly rational actors. In short:

  • I'm sympathetic to the idea that fully rational actors should have precise credences
    • (for the normal vNM kind of reasons)
    • I don't want to fully commit to that view, but it also doesn't seem to me to be cruxy
  • I don't think that boundedly rational actors are rationally obliged to have precise credences
  • But I don't think that entails saying "you have no reason to adopt a precise credence", i.e. giving up on the idea that they can make progress towards something (which I might think of as "the precise credence a fully rational version of them would have") by thinking more

Because if the sign of intervention X for the long-term varies across your range of credences, that means you don't have a reason to do X on total-EV grounds.

I reject this claim. For a toy example, suppose that I could take action X, which will lose me $1 if the 20th digit of Pi is odd, and gain me $2 if the 20th digit of Pi is even. Without doing any calculations or looking it up, my range of credences is [0,1] -- if I think about it long enough (at least with computational aids), I'll resolve it to 0 or 1. But right now I can still make guesses about my expectation of where I'd end up (somewhere close to 50%), and think that this is a good bet to take -- rather than saying that EV somehow doesn't give me any reason to like the bet.
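
For what it's worth, the arithmetic behind this toy example is easy to make explicit. The sketch below is my own illustration (the $1/$2 payoffs come from the example above; `bet_ev` is a hypothetical helper name):

```python
def bet_ev(p_odd, loss_if_odd=1, gain_if_even=2):
    """Expected value of the bet, given credence p_odd that the digit is odd."""
    return -loss_if_odd * p_odd + gain_if_even * (1 - p_odd)

# Before doing any calculation: a ~50% guess about where the resolved
# credence would land still gives the bet positive expected value.
print(bet_ev(0.5))  # 0.5 -- positive, so EV gives a reason to take the bet

# After "thinking long enough": the 20th decimal digit of pi
# (3.14159265358979323846...) is 6, i.e. even, so the credence resolves to 0.
digits_of_pi = "14159265358979323846"
p_odd_resolved = float(int(digits_of_pi[19]) % 2)
print(bet_ev(p_odd_resolved))  # 2.0 -- the bet pays off
```

The point is that the [0,1] credence range and the ~50% best guess about where it would resolve coexist, and the latter is enough to evaluate the bet.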

This seems hugely decision-relevant to me, if we have other decision procedures under cluelessness available to us other than committing to a precise best guess, as I think we do.

For what it's worth I'm often pretty sympathetic to other decision procedures than committing to a precise best guess (cluelessness or not).

ETA: I'm also curious whether, if you agreed that we aren't rationally obligated to assign determinate credences in many cases, you'd agree that your arguments about unknown unknowns here wouldn't work. (Because there's no particular reason to commit to one "simplicity prior," say. And the net direction of our biases on our knowledge-sampling processes could be indeterminate.)

I don't think I'd agree with that. Although I could see saying "yes, this is a valid argument about unknown unknowns; however, it might be overwhelmed by as-yet-undiscovered arguments about unknown unknowns that point in the other direction, so we should be suspicious of resting too much on it".

I think this is at least in the vicinity of a crux?

My immediate thoughts (I'd welcome hearing about issues with these views!):

  • I don't think our credences all ought to be determinate/precise
  • But I've also never been satisfied with any account I've seen of indeterminate/imprecise credences
    • (though noting that there's a large literature there and I've only seen a tiny fraction of it)
  • My view would be something more like:
    • As boundedly rational actors, it makes sense for a lot of our probabilities to be imprecise
    • But this isn't a fundamental indeterminacy — rather, it's a view that it's often not worth expending the cognition to make them more precise
    • By thinking longer about things, we can get the probabilities to be more precise (in the limit converging on some precise probability)
    • At any moment, we have credence (itself kind of imprecise absent further thought) about where our probabilities will end up with further thought
    • What's the point of tracking all these imprecise credences rather than just single precise best-guesses?
      • It helps to keep tabs on where more thinking might be helpful, as well as where you might easily be wrong about something
  • On this perspective, cluelessness = inability to get the current best guess point estimate of where we'd end up to deviate from 50% by expending more thought

Just on this point:

I can't conveniently assume good and bad unknown unknowns 'cancel out'

FWIW, my take would be:

  • No, we shouldn't assume that they "cancel out"
  • However, as a structural fact[*] about the world, the prevalence of good and bad unknown unknowns is correlated with the good and bad knowns (and known unknowns)
  • So, on average and in expectation, things will point in the same direction as the analysis ignoring cluelessness (although it's worth being conscious that this will turn out wrong in a significant fraction of cases ― probably approaching 50% for something like cats vs dogs)

Of course this relies heavily on the "fact" I denoted as [*], but really I'm saying "I hypothesise this to be a fact". My reasons for believing it are something like:

  • Some handwavey argument along these lines:
    • Among the many complex things we could consider, they will vary in the proportion of considerations that point in a good direction
    • If our knowledge sampled randomly from the available considerations, we would expect this correlation
    • It's too much to expect our knowledge to sample randomly ― there will surely sometimes be structural biases ― but there's no reason to expect the deviations to be so perverse as to (on average) actively mislead
      • (this needn't preclude the existence of some domains with such a perverse pattern, but I'd want a positive argument that something might be such a domain)
    • Given that we shouldn't expect the good and bad unknown unknowns to cancel out, by default we should expect them to correlate with the knowns
  • A sense that empirically this kind of correlation is true in less clueless-like situations
    • e.g. if I uncover a new consideration about whether it's good or bad for EAs to steal-to-give, it's more likely to point to "bad" than "good"
    • Combined with something like a simplicity prior ― if this effect exists for things where we have a fairly strong sense of the considerations we can track, by default I'd expect it to exist in weaker form for things where we have a weaker sense of the considerations we can track (rather than being non-existent or occurring in a perverse form)

In principle, this could be tested experimentally. In practice, you're going to be chasing after tiny effect sizes with messy setups, so I don't think it's viable any time soon for human judgement. I do think you might hope to one day run experiments along these lines for AI systems. Of course they would have to be cases where we have some access to the ground truth, but the AI is pretty clueless -- perhaps something like getting non-superintelligent AI systems to predict outcomes in a complex simulated world.

But having written that, I notice that the example helped me to articulate my thoughts on cluelessness! Which makes it seem like actually a pretty helpful example. :)

(And maybe this is kind of the point -- that cluelessness isn't an absolute of "we cannot hope even in principle to say anything here", but rather a pragmatic barrier of "it's never gonna be worth taking the time to know".)

I wonder if the example is weakened by the last sentence:

In fact, you even have no idea whether donating his money to either will turn out overall better than not donating it to begin with.

Right now I feel like this is a hard question. But it doesn't feel like an impossibly intractable one. I think if the forum spent a week debating this question you'd get some coherent positions staked out -- where after the debate it would still be unreasonable to be very confident in either answer, but it wouldn't seem crazy to think that the balance of probabilities suggested favouring one course of action over the other.

This makes me notice that the cats and dogs question feels different only in degree, not kind. I think if you had a bunch of good thinkers consider it in earnest for some months, they wouldn't come out indifferent. I'd hazard that it would probably be worth >$0.01 (in expectation, on longtermist welfarist grounds) to pay to switch which kind of shelter the billions went to. But I doubt it would be worth >$100. And at that point it wouldn't be worth the analysis to get to the answer.

Given this, my worry is that expressing things like "EA aims to be maximizing in the second sense only" may be kind of gaslight-y to some people's experience (although I agree that other people will think it's a fair summary of the message they personally understood).

I largely agree with this, but I feel like your tone is too dismissive of the issue here? Like: the problem is that the maximizing mindset (encouraged by EA), applied to the question of how much to apply the maximizing mindset, says to go all in. This isn't getting communicated explicitly in EA materials, but I think it's an implicit message which many people receive. And although I think that it's unhealthy to think that way, I don't think people are dumb for receiving this message; I think it's a pretty natural principled answer to reach, and the alternative answers feel unprincipled.

On the types of maximization: I think different pockets of EA are in different places on this. I think it's not unusual, at least historically, for subcultures to have some degree of lionization of 1). And there's a natural internal logic to this: if doing some good well is good, surely doing more is better?

On the potential conflicts between ethics and self-interest: I agree that it's important to be nuanced in how this is discussed.

But:

  1. I think there's a bunch of stuff here which isn't just about those conflicts, and that there is likely potential for improvements which are good on both prudential and impartial grounds.

  2. Navigating real tensions is tricky, because we want to be cooperative in how we sell the ideas. cf. https://forum.effectivealtruism.org/posts/C665bLMZcMJy922fk/what-is-valuable-about-effective-altruism-implications-for

I really appreciated this post. I don't agree with all of it, but I think that it's an earnest exploration of some important and subtle boundaries.

The section of the post that I found most helpful was "EA ideology fosters unsafe judgment and intolerance". Within that, the point that I found most striking was: that there's a tension in how language gets used in ethical frameworks and in mental wellbeing frameworks, and people often aren't well equipped with the tools to handle those tensions. This ... basically just seems correct? And seems like a really good dynamic for people to be tracking.

Something which I kind of wish you'd explored a bit more is ways in which EA may be helpful for people's mental health. You get at that a bit when talking about how/why it appeals to people, and seem to acknowledge that there are ways in which it can be healthy for people to engage. But I think we'll get to a better/deeper understanding of the dynamics faster if we look honestly at the ways in which EA can be good for people as well as bad, and at what level of tradeoff in terms of potentially being bad for people is worth accepting. (On the latter, I think the correct answer is "a little bit", in that there's no way to avoid all harms without just not being in the space at all, which would be a clear mistake for EA; though I'm also inclined to think the correct answer is "somewhat less than at present".)
