det (97 karma, 3 comments)

det

I'll explain my downvote.

I think the concern you're expressing is legitimate and a reasonable thing to be worried about. I think Anthropic should be clear about their strategy. The Google investment does give me pause, and my biggest worry about Anthropic (as for many people, I think) has always been that their strategy could ultimately lead to accelerating capabilities more than alignment.

I just don't think this post expressed that concern particularly well, or in a way I'd expect or want Anthropic to feel compelled to respond to. My preferred version of this post would engage with the reasons in favor of Anthropic's actions, and with how their recent actions have concretely differed from what they've stated in the past.

My understanding of (part of) their strategy has always been that they want to work with the largest models, and sometimes release products with the possibility of profiting off of them (hence the PBC structure rather than a nonprofit). These ideas sound reasonable (though not bulletproof) to me, so I didn't see the Google deal as a sudden change of direction or backstab -- it's easily explainable (although possibly concerning) within my preexisting model of what Anthropic is doing.

So my objection is to jumping to a "demand answers" framing, to FTX comparisons, and to accusations of Machiavellian scheming, rather than to an "I'd really like Anthropic to comment on why they think this is good, and I'm worried they're not adequately considering the downsides" framing. The former, to me, requires significantly more evidence of wrongdoing than I'm aware of or than you've provided.

det

Here’s an attempt at a meta-level diagnosis of the conversation. My goal is to explain how the EA Forum got filled with race-and-IQ conversations that nobody really wants to be having, with everyone feeling like the other side is at fault for this.

First, the two main characters. 

Alice from Group A is:

  • High-contextualizing
  • Tends to bring up diversity as a value in conversations
  • Finds Bostrom’s apology highly inadequate
  • Absolutely does not want there to be object-level discussions of group IQ differences on the EA forum 
    • Statements that look racist should be strongly challenged, especially since they seem very likely to alienate people from EA.

Bob from Group B is:

  • High-decoupling
  • Tends to bring up epistemics as a value in conversations.[1] 
  • Probably thinks Bostrom’s apology was at least fine, if not great (since he can’t find a sentence in it which, when read literally, expresses something he thinks is wrong)
  • Thinks conversations on a topic can only be improved by additional true information on that topic
    • So statements that seem false should be strongly challenged.

I’m naturally more of a Group B person, but as the discussion has evolved, I think I’ve moved toward understanding and agreeing with the concerns of Group A.[2] Hopefully this allows me to be moderately objective here -- but I expect I’m still biased in the B direction, so I welcome those who are naturally more A to tear this to shreds.

With the groundwork laid, here’s my potted conversation between Alice from A and Bob from B.

Alice: Bostrom’s apology is inadequate. He should completely renounce the position in the old email. Saying there’s a racial IQ gap is completely unacceptable, and he should renounce this too.

Bob: I understand criticizing Bostrom’s apology, but as far as I can tell he was correct about the existence of an IQ gap. Here, look at these sources I found. You can’t ask him to say something false.

Alice: I absolutely do not want to discuss the question of whether or not there is an IQ gap. Please don’t bring up this question, it will be extremely alienating to tons of people for no benefit.

Bob: Hold up, it seems to me like you made a factual claim about race and IQ before I did. I’m just continuing the conversation you started. Am I not allowed to point out your mistake?

Alice: If you go around discussing questions of race and IQ, people will assume that you’re a racist. It could be ok to discuss this question in narrow contexts in academia, but it’s not ok here and the discussion is going to make us all look bad.

Bob: But you said something false! Are you saying we have to lie for good PR? I don’t support that.

Alice: I’m saying I don’t want to be having this object-level conversation. Can’t we just agree to condemn racist ideas?

[debate continues, neither side is happy about it.]

  1. ^

    I don’t mean to imply by this framing that diversity and epistemics are inherently in opposition -- this is just an observation that each side mentions one more than the other. I expect both A and B care about both values.

  2. ^

    Remembering other forums that were practically split apart by discussions of group IQ differences was one big update for me toward “discussing this on the EA forum is really bad.” This makes me sympathize more with wishing the conversation could have been avoided at all costs, although I'm less sure what to do going forward.

det

I upvoted this post and think it's a good contribution. The EA community as a whole has done damage to itself over the past few days. But I'm worried about what it would mean to support having less epistemic integrity as a community.

This post says both:

    If you believe there are racial differences in intelligence, and your work forces you to work on the hard problems of resource allocation or longtermist societal evolution, nobody will trust you to do the right tradeoffs.

and

    If he'd said, for instance, "hey I was an idiot for thinking and saying that. We still have IQ gaps between races, which doesn't make sense. It's closing, but not fast enough. We should work harder on fixing this." That would be more sensible. Same for the community itself disavowing the explicit racism.

The first quote says believing X (that there exists a racial IQ gap) is harmful and will result in nobody trusting you. The second says X is, in fact, true.[1]

For my own part, I will trust someone less if they endorse statements they think are false. I would also trust someone less if they seemed weirdly keen on having discussions that kinda seem racist. Unfortunately, it seems we're basically having to decide between these two options.

My preferred solution is to -- while being as clear as possible about the context, and taking great care not to cause undue harm -- maintain epistemic integrity. I think "compromising your ability to say true, relevant things in order to be trusted more" is the kind of galaxy-brain PR move that probably doesn't work. You incur the cost of decreased epistemic integrity, and then don't fool anyone else anyway. If I can lose someone's trust by saying something true in a relevant context,[2] then keeping their trust was a fabricated option.

I'm left not knowing what this post wants me to do differently. When I'm in a relevant conversation, I'm not going to lie or dissemble about my beliefs, although I will do my best to present them empathetically and in a way that minimizes harm. But if the main thrust here is "focus somewhat less on epistemic integrity," I'm not sure what a good version of that looks like in practice, and I'm quite worried about it being taken as an invitation to be less trustworthy in the interest of appearing more trustworthy.

  1. ^

    I've seen other discussions where someone seems to claim both "the racial IQ gap is shrinking / has no genetic component / is environmentally caused" and "believing there is a racial IQ gap is, in itself, racist."

  2. ^

    I think another point of disagreement might be whether this has been a relevant context to discuss race and IQ. My position is that if you're in a discussion about how to respond to a person saying X, you're by necessity also in a discussion about whether X is true. You can't have the first conversation and completely bracket the second, as the truth or falsity of X is relevant to whether believing X is worthy of criticism.