LessWrong/Lightcone Infrastructure
Maybe useful: "Latently controversial" – there's no public controversy because people didn't know about it, but if people had more information, there would be. I think this would have been more the case with Manifest if the article hadn't come out, but it's still reasonable to consider Manifest to have some inherent potential "controversialness" given its choice of speakers.
I think if the only thing claiming controversy was the article, it might make sense to call that a fabricated/false claim by an outsider journalist. But given this post, the fact that many people either disapprove of Manifest or want to avoid it, and the fact that Austin writes about consciously deciding to invite people they thought were edgy, I think it's just a reasonable description.
And there's a disanalogy there. Racism is about someone's beliefs and behaviors, and I can't change someone else's beliefs and behaviors with a label. But controversy means people disagree, disapprove, etc., and someone can make someone else's belief controversial just by disagreeing with it (or, if one disagreement isn't enough to constitute controversy, a person at least contributes to it with their disagreement).
RobertM and I are having a "dialogue"[1] on LessWrong with a lot of focus on whether it was appropriate for this to be posted when it was and with the info collected so far (e.g. without waiting for Nonlinear's response).
What is the optimal frontier for due diligence?
I think it matters a lot to be precise with claims here. If someone believes that any case of people with power over others asking them to commit crimes is damning, then all we need to establish is that this happened. If it's understood that whether this was bad depends on the details, then we need to get into the details. Jack's comment was not precise, so it felt important to disambiguate (and make the claim I think is correct).
My guess is it was enough time to say which claims you objected to and sketch out the kind of evidence you planned to bring. And Ben judged that your response didn't indicate you were going to bring anything that would change his mind about the info he had being worth sharing. E.g. you seemed to focus on showing that Alice couldn't be trusted, but Ben felt that this would not refute enough of the other info he had collected, and that the kinds of refutation (e.g. only a $50 fine for driving without a license, she brought back illegal substances anyway) were not compelling enough to change the judgment that the info was worth sharing.
I do think one can make judgments from the meta info, and 3 hours is enough to get a lot of that.
There is something of a missing mood on your part that I consider quite damning. From what I hear and see (Ben's report of your call with him, how you're responding publicly, the threat to Lightcone/Ben), you are overwhelmingly concerned with defending yourself and don't seem at all contrite that people you employed feel so extremely hurt by their time with you. I haven't heard you dispute their claims of hurt (do you think those are lies for some reason?); instead you've focused on the veracity of their reasons for being hurt. But do you think you're causally entangled with their feeling hurt? If so, where is the apology, or the contrition and horror at yourself, given that they think being with you resulted in the worst months of their lives?
I'd understand the lack of that if your position were "they're definitely lying about how they felt, probably for motivation X; give us time and we can prove it", but this hasn't been the nature of your response.
I actually would expect more "competent" uncompassionate people concerned only with their own reputation to have acted contrite, because it'd make the audience more sympathetic. That you haven't suggests you all aren't very good at modeling people, which makes it more likely you weren't modeling your employees' experience very well either, perhaps resulting in a lot of harm from negligence more than malice (which still warrants sharing this info about you).
Crossposting from LessWrong, since I think this caveat is more common on the EA Forum.
At first blush, I find this common caveat amusing.
1. If there are errors, we can infer that those providing feedback were unable to identify them.
2. If the author was fallible enough to have made errors, perhaps they are fallible enough to miss errors in input sourced from others.
What purpose does it serve? Given it's often paired with "credit goes to… <list of names>", it seems like an attempt to ensure that people providing feedback/input on a post are exposed only to upside from doing so, while the author takes all the downside reputational risk if the post is received poorly or exposed as flawed.
Maybe this works? As a capable reviewer/feedback-haver, I might agree to offer feedback on a poor post written by a poor author, perhaps pointing out flaws. My having given feedback on it might reflect poorly on my time allocation, but the bad output shouldn't be assigned to me. Whereas if my name is attached to something quite good, it's plausible that I contributed to that. I think this is because it's easier to help a good post be great than to save a bad post.
But these inferences seem like they're there to be made and aren't changed by what an author might caveat at the start. I suppose the author might want to remind the reader of them rather than make them true through an utterance.
Upon reflection, I think (1) doesn't hold. The reviewers/input makers might be aware of the errors but be unable to save the author from them. (2) That the reviewers made mistakes that have flowed into the piece seems all the more likely the worse the piece is overall, since we can update that the author wasn't likely to catch them.
On the whole, I think I buy the premise that we can't update too negatively on reviewers and feedback-givers for having deigned to give feedback on something bad, though their time allocation is suspect. Maybe they're bad at saying no, maybe they're bad at telling people their ideas aren't that good, maybe they have hope for this person. Unclear. Upside I'm more willing to attribute.
Perhaps I would replace the "errors are my own[, credit goes to]" with a reminder or pointer that these are the correct inferences to make. The words themselves don't change them. Not sure, haven't thought about this much.
Edited to add: I do think "errors are my own" is a very weird kind of social move being performed in an epistemic context, and I don't like it.