Lukas_Gloor
"Influence-seeking" doesn't quite resonate with me as a description of the virtue on the other end of "truth-seeking."

What's central in my mind when I speak out against putting "truth-seeking" above everything else is mostly a sentiment of "I really like considerate people and I think you're driving out many people who are considerate, and a community full of disagreeable people is incredibly off-putting."

Also, I think the considerateness axis is not the same as the decoupling axis. One can be very considerate and also great at decoupling; you just have to be able to couple things back together as well.

Good points! It seems good to take a break or at least move to the meta level.

I think one emotion that is probably quite common in discussions about what norms should be (at least in my own experience) is clinging. Quoting from Joe Carlsmith's post on it:

Clinging, as I think about it, is a certain mental flavor or cluster of related flavors. It feels contracted, tight, clenched, and narrow. It has a kind of hardness, a “not OK-ness,” and a (sometimes subtle) kind of desperation. It sees scarcity. It grabs. It sees threat. It pushes away. It carries seeds of resentments and complaints. [...]

Often, in my experience, clinging seems to hijack attention and agency. It makes it harder to think, weigh considerations, and respond. You are more likely to flail, or stumble around, or to “find yourself” doing something rather than choosing to do it. And you’re more likely, as well, to become pre-occupied by certain decisions — especially if both options involve things you’re clinging in relation to — or events. Indeed, clinging sometimes seems like it treats certain outcomes as “infinitely bad,” or at least bad enough that avoiding them is something like a hard constraint. This can cause consequent problems with reasoning about what costs to pay to avoid what risks.

Clinging is also, centrally, unpleasant. But it’s a particular type of unpleasant, which feels more like it grabs and restricts and distorts who you are than e.g. a headache.

When it feels like a lot is at stake and our values are being threatened, we may try to push the social pendulum in our desired direction as hard as possible. However, that has an aggravating and polarizing effect on the debate, because the other side will see that attitude and think, "This person is not making any concessions whatsoever, and it seems like even though the social pendulum is already favorable to them, they'll keep pushing against us!"

So, to de-escalate these dynamics, it seems valuable to acknowledge the values that are at stake for both sides, even just to flag that you're not in favor of pushing the pendulum as far as possible. 

For instance, the discussion might already feel more relaxed if the side that is concerned about losing what's valuable about "truth-seeking" acknowledged that there is a bar for them, too: if they thought they were dealing with people full of hate, or people who advocate views that predictably cause harm to others (and who advocate those views while aware of this, out of a lack of concern for the affected others), the "truth-seeking" proponents would indeed step in and not tolerate it. Likewise, the other side could acknowledge that it's bad when people get shunned based on superficial associations or vibes. (To give an example of something I consider superficial: saying "sounds like they're into eugenics" as though this should end the discussion, without pointing out any way in which what the person is discussing is hateful, lacks compassion, or is otherwise likely to cause harm.) That kind of shunning is bad not just for well-intentioned individuals who might get unfairly ostracized, but also for discourse in general, because people will no longer speak their minds.

Well said.

I meant to say the exact same thing, but seem to have struggled to communicate it.

I want to point out that my comment above was specifically reacting to the following line and phrasing in timunderwood's parent comment:

I also have a dislike for excluding people who have racist style views simply on that basis, with no further discussion needed, because it effectively is setting the prior for racism being true to 0 before we've actually looked at the data.

My point (and yours) is that this quoted passage would be clearer if it said "genetic group differences" instead of "racism."

I agree with this diagnosis of the situation. At the same time, I feel like it's the wrong approach to make it a scientific proposition whether racism is right or not. It should never be right, no matter the science. (I know this is just semantics, but I think it adds a lot of moral clarity to frame it this way: science can never turn out to support racism.) As I said here, the problem I see with the HBD crowd is that they think their opinions on the science justify certain other things, or that it's a very important topic.

I agree the article was pretty bad and unfair, and I agree with most things you say about cancel culture.

But then you lose me when you imply that racism is no different from accepting one of the inevitable counterintuitive conclusions in philosophy thought experiments. (I've previously had a lengthy discussion on this topic in this recent comment thread.)

If I were organizing a conference where I wanted interesting and relevant ideas to be discussed, I'd still want there to be a bar for attendees, to avoid the problem Scott Alexander pointed out (someone else recently quoted this in the same context, so hat tip to them; I forget who it was):

The moral of the story is: if you’re against witch-hunts, and you promise to found your own little utopian community where witch-hunts will never happen, your new society will end up consisting of approximately three principled civil libertarians and seven zillion witches. It will be a terrible place to live even if witch-hunts are genuinely wrong.

I'd be in favor of having the bar be significantly lower than many outrage-prone people are going to be comfortable with, but I don't think it's a great idea to have a bar that is basically "if you're interesting, you're good, no matter what else."

In any case, that's just how I would do it. There are merits to having groups with different bars.

(In the case of going for a very low one, I think it could make sense to think about the branding and whether it's a good idea to associate forecasting in particular with a low filter.)

Basically, what I'm trying to say is: I'd like to be on your side here, because I agree with many things you're saying and see where you're coming from, but you make it impossible for me to side with you if you think there's no difference between biting inevitable bullets in common EA thought experiments and "actually being racist" or "recently having made incredibly racist comments."

I don't think I'm using the adjective 'racist' here in a watered-down or inflationary sense; I try to be pretty careful about when I use that word. FWIW, I also think the terminology "scientific racism" that some people are using is muddying the waters here. There's a lot of racist pseudoscience going around, but it's not the case that every claim about group differences is definitely pseudoscience (it would be a strange coincidence if all groups of all kinds had no statistical differences in intelligence-associated genes). However, the relevant point is that group differences don't matter (it wouldn't make a moral difference no matter how things shake out, because policies should be about individuals and not groups), and that a lot of people who get very obsessed with these questions are actually racist, while the ones who aren't (like Scott Alexander, or Sam Harris when he interviewed Charles Murray on a podcast) take great care to distance themselves from actual racists in what they say about the topic and in what conclusions they want others to draw from discussing it. So, if someone were to call Scott Alexander and Sam Harris "scientifically racist," that seems like it waters down racism discourse, because I don't think those people's views are morally objectionable, even though many people's views in that cluster are.

I think generally though it's easy to misunderstand people, and if people respond to clarify, you should believe what they say they meant to say, not your interpretation of what they said.

Depends on context. Not (e.g.) if someone has a pattern of using plausible deniability to get away with things (I actually don't know if this applies to Hanania) or if we have strong priors for suspecting that this is what they're doing (arguably applies here for reasons related to his history; see next paragraph).

If someone has a history of being racist, but they say they've changed, it's IMO on them to avoid making statements that are easily interpreted as incredibly racist. And if they accidentally make such an easily misinterpretable statement, it's also on them to immediately clarify what they did or didn't mean. 

Generally, in contexts we have strong reason to believe might be adversarial, incompetence or stupidity cannot always be accepted as a sufficient excuse, because adversaries will claim it as their excuse too; if you let it go through, you give cover to all malefactors. You need adversarial epistemology. Worst case, you'll judge some people harshly who happen to merely be incompetent in ways that, unfortunately, provide exactly the cover a bad actor would want. But [1] even though many people make mistakes or can seem incompetent at times, it's actually fairly rare for incompetence to look exactly like what a bad actor would do for more sinister, conscious reasons (and then claim incompetence as an excuse), and [2], sadly, a low rate of false positives seems the lesser evil in the utilitarian calculus, because we're in an adversarial context where the harms conditional on the accusation being right are asymmetrically larger than the harms conditional on it being wrong. (Of course, there's also an option like "preserve option value and gather further info," which is overall preferable, and I definitely like that you reached out to Hanania in that spirit. I'm not saying we should all have made up our minds solely based on that tweet; I'm mostly just saying that I find it pretty naive to immediately believe the guy just because he said he didn't mean it in a racist way.)
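To make the asymmetry in [2] concrete, here is a toy expected-harm comparison (purely illustrative; the symbols below are just a way of framing the point, not estimates of anything): let $p$ be the probability that the person is in fact a bad actor, $H_{\text{bad}}$ the harm from extending trust to a bad actor, and $H_{\text{fp}}$ the harm from unfairly judging someone who is merely incompetent or careless. Then

$$\mathbb{E}[\text{harm} \mid \text{extend trust}] = p \cdot H_{\text{bad}}, \qquad \mathbb{E}[\text{harm} \mid \text{withhold trust}] = (1-p) \cdot H_{\text{fp}},$$

and whenever $H_{\text{bad}} \gg H_{\text{fp}}$, withholding trust has lower expected harm even at fairly modest values of $p$.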

I made the following edit to my comment above-thread:

[Edit: To be clear, by "HBD crowd" I don't mean people who believe and say things like "intelligence is heritable" or "embryo selection towards smarter babies seems potentially very good if implemented well." I thought this was obvious, but someone pointed out that people might file different claims under the umbrella "HBD".]

I'm not sure this changes anything about your response, but my perspective is that a policy of "let's not get obsessed over mapping out all possible group differences and whether they're genetic" is (by itself) unlikely to start us down a slippery slope that ends in something like Lysenkoism.

For illustration, I feel like my social environment has tons of people with whom you can have reasonable discussions about e.g., applications of embryo selection, but they mostly don't want to associate with people who talk about IQ differences between groups a whole lot and act like it's a big deal if true. So, it seems like these things are easy to keep separate (at least in some environments).

Also, I personally think the best way to make any sort of dialogue saner is NOT to pick the most controversial true thing you can think of and then announce with loudspeakers that you're ready to die on that hill. (In a way, that sort of behavior would even send an "untrue" [in the "misdirection" sense discussed here] signal: usually, people die on hills that are worthy causes. So, if you're sending the signal "group differences discourse is worth dying over," you're implicitly signalling that this is an important topic. But, as I argued, I don't think it is, and creating an aura of importance around it is part of what I find objectionable and where I think the label "racist" can be appropriate, if that's the sort of motivation that draws people to these topics. So, even in terms of wanting to convey true things, I think it would be a failure of prioritization to focus on this swamp of topics.)

I'm personally very turned off by the HBD crowd.

[Edit: To be clear, by "HBD crowd" I don't mean people who believe and say things like "intelligence is heritable" or "embryo selection towards smarter babies seems potentially very good if implemented well." I thought this was obvious, but someone pointed out that people might file different claims under the umbrella "HBD".]

For me, it's not necessarily because I think they're wrong about most factual claims that they're making.

Instead, I'm turned off by the attitude that these are important questions to focus intellectual pursuits on. The existence and origin of group differences seem to me obviously not of great practical importance, so when people obsess over this, I suspect it's coming either from a place of edginess and wanting to feel superior to those who "cannot face the truth," or (worse) from a darker place of entitlement and wanting to externalize bad feelings about one's own life by blaming some outgroup that has received "undeserved" support.

When thinking about how to make the world better for humans (excluding non-human animals for the moment), I see basically three major cause areas (very simplified):

(1) Evidence-based, immediate-outcome-focused interventions that improve things on some legible metric, like school attendance, medicines successfully administered, etc.

(2) Longer-term structural reform via politics.

(3) Focusing on technological breakthroughs and risks that either improve or worsen things for everyone.

If someone is interested in (1), HBD doesn't change anything about evidence-based progress on legible metrics. We'd continue to want to support evidence-based interventions in all kinds of contexts that make things better for individuals on some concrete variables. (The focus on evidence-based metrics is great because it helps us sideline a lot of politics-inspired storytelling that turns out to be wrong, such as the claim that poor people will make poor choices if you give them money [GiveDirectly example].)

If someone is interested in (3), they'll hopefully understand that a lot of things that are pressing problems today will either no longer matter in 1-20 years because we're all dead, or they'll be more easily solvable with the help of aligned powerful AIs and radical technologically-aided re-structuring of society.

Lastly, if someone is interested in (2), then good luck: it seems like the EA community has failed to find convincing interventions in this area. If you know of some intervention that would be extremely cost-effective, where beliefs about HBD that you consider false are the only crux standing in the way of implementing it, that would sound interesting to talk about. But this isn't the case, is it? I think structural reform is intrinsically hard.

I can see how HBD questions might have some tangential relevance for policy reform, but emphasis on tangential, and I also think that we're so far away from doing sensible things under (2) that this seems unlikely to be an important crux. (Also, if I were to prioritize something in this space, it would be meta-level interventions like improving the news landscape.)

In this context of structural reform, I should flag that I'm also very much against wokeism, and I agree that there are parallels to Lysenkoism. But I don't think "being against wokeism" implies "we should be interested in HBD questions." In fact, I think I'm against both for related reasons. I think it's often not productive to view everything in terms of "group vs. group." I think we should spend resources on causes where we can point to concrete benefits for individuals, no matter their group. There's so much to do on that front already that other things feel like a bit of a distraction, both in general and especially when considering the mind-killing effects of political controversies.

So, to summarize, your comment about HBD being important seems very wrong to me.

Edit: I guess a steelman of your point is that you're not necessarily saying HBD is in itself important, just that it would be bad to actively deny it (presumably because this would lend momentum to wokeism or new forms of Lysenkoism). I have more sympathy for that, but the way I see it, HBD and wokeism are more like two sides of a toxic dynamic, and it would be better if we could get back to other concerns.

The point I wanted to make in the short form was directed at a particular brand of skeptic. 

When I said,

Something has gone wrong if people think pausing can only make sense if the risks of AI ruin are >50%.

I didn't mean to imply that anyone who opposes pausing would consider >50% ruin levels their crux.

Likewise, I didn't mean to imply that "let's grant 5% risk levels" is something that every skeptic would go along with (but good that your comment is making this explicit!). 

For what it's worth, if I had to give the range within which the people I currently respect most, epistemically, can reasonably disagree on this question today (June 2024), I would probably not include credences <<5% in that range (I'd maybe put it more at something like 15-90%?). (This is of course subject to change if I encounter surprisingly good arguments for something outside the range.) But that's a separate(!) discussion, separate from the conditional statement I wanted to argue for in my short form. (Obviously, other people will draw the line elsewhere.)

On the 80k article, I think it aged less well than what one could maybe have written at the time, but it was written when AI risk concerns still seemed fringe. So, just because it didn't (in my view) age amazingly doesn't mean it was unreasonable at the time. Back then, I'd probably have called it "lower than what I would give, but within the range of what I consider reasonable."

Yeah, I agree. I wrote about timing considerations here; I agree this is an important part of the discussion.
