OCB

Owen Cotton-Barratt

Given this, my worry is that expressing things like "EA aims to be maximizing in the second sense only" may feel kind of gaslight-y to some people, given their experience (although I agree that other people will think it's a fair summary of the message they personally understood).

I largely agree with this, but I feel like your tone is too dismissive of the issue here? Like: the problem is that the maximizing mindset (encouraged by EA), applied to the question of how much to apply the maximizing mindset, says to go all in. This isn't getting communicated explicitly in EA materials, but I think it's an implicit message which many people receive. And although I think that it's unhealthy to think that way, I don't think people are dumb for receiving this message; I think it's a pretty natural principled answer to reach, and the alternative answers feel unprincipled.

On the types of maximization: I think different pockets of EA are in different places on this. I think it's not unusual, at least historically, for subcultures to have some degree of lionization of the first sense. And there's a natural internal logic to this: if doing some good well is good, surely doing more is better?

On the potential conflicts between ethics and self-interest: I agree that it's important to be nuanced in how this is discussed.

But:

  1. I think there's a bunch of stuff here which isn't just about those conflicts, and that there is likely potential for improvements which are good on both prudential and impartial grounds.

  2. Navigating real tensions is tricky, because we want to be cooperative in how we sell the ideas. cf. https://forum.effectivealtruism.org/posts/C665bLMZcMJy922fk/what-is-valuable-about-effective-altruism-implications-for

I really appreciated this post. I don't agree with all of it, but I think that it's an earnest exploration of some important and subtle boundaries.

The section of the post that I found most helpful was "EA ideology fosters unsafe judgment and intolerance". Within that, the point I found most striking was that there's a tension in how language gets used in ethical frameworks and in mental wellbeing frameworks, and people often aren't well equipped with the tools to handle those tensions. This ... basically just seems correct? And seems like a really good dynamic for people to be tracking.

Something I kind of wish you'd explored a bit more is the ways in which EA may be helpful for people's mental health. You get at that a bit when talking about how/why it appeals to people, and seem to acknowledge that there are ways in which it can be healthy for people to engage. But I think we'll get to a better/deeper understanding of the dynamics faster if we look honestly at the ways in which it can be good for people as well as bad, and at what level of tradeoff in terms of potentially being bad for people is worth accepting. (I think the correct answer will be "a little bit", in that there's no way to avoid all harms without just not being in the space at all, and I think that would be a clear mistake for EA; though I'm also inclined to think that the correct answer is "somewhat less than at present".)

it sounds like you see weak philosophical competence as being part of intent alignment, is that correct?

Ah, no, that's not correct.

I'm saying that weak philosophical competence would:

  • Be useful enough for acting in the world, and in principle testable-for, that I expect it to be developed as a form of capability before strong superintelligence
  • Be useful for research on how to produce intent-aligned systems

... and therefore that if we've been managing to keep things more or less intent aligned up to the point where we have systems which are weakly philosophically competent, it's less likely that we have a failure of intent alignment thereafter. (Not impossible, but I think a pretty small fraction of the total risk.)

Yeah, I appreciated your question, because I'd also not managed to unpack the distinction I was making here until you asked.

On the minor issue: right, I think that for some particular domain(s), you could surely train a system to be highly competent in that domain without this generalizing to even weak philosophical competence overall. But if you had a system which was strong at both of those domains despite not having been trained on them, and especially if that was also true for, say, three more comparable domains, I guess I kind of do expect it to be good at the general thing? (I haven't thought long about that.)

It's not clear we have too much disagreement, but let me unpack what I meant:

  • Let strong philosophical competence mean competence at all philosophical questions, including those like metaethics which really don't seem to have any empirical grounding
    • I'm not trying to make any claims about strong philosophical competence
    • I might be a little more optimistic than you about getting this by default as a generalization of weak philosophical competence (see below), but I'm still pretty worried that we won't get it, and I didn't mean to rely on it in my statements in this post
  • Let weak philosophical competence mean competence at reasoning about complex questions which ultimately have empirical answers, where it's out of reach to test them empirically, but one may get better predictions from finding clear frameworks for thinking about them
  • I claim that by the time systems approach strong superintelligence, they're likely to have a degree of weak philosophical competence
    • Because:
      • It would be useful for many tasks, and this would likely be apparent to mildly superintelligent systems
      • It can be selected for empirically (seeing which training approaches etc. do well at weak philosophical competence in toy settings, where the experimenters have access to the ground truth about the questions they're having the systems use philosophical reasoning to approach)
  • I further claim that weak philosophical competence is what you need to be able to think about how to build stronger AI systems that are, roughly speaking, safe, or intent aligned
    • Because this is ultimately an empirical question ("would this AI do something an informed version of me / those humans would ultimately regard as terrible?")
    • I don't claim that this would extend to being able to think about how to build stronger AI systems that it would be safe to make sovereigns

IDK, structurally your argument here reminds me of arguments that we shouldn't assume animals are conscious, since we can only generalise from human experiences. (In both cases I feel like there's not nothing to the argument, but I'm overall pretty uncompelled.)
