
timunderwood (944 karma)

Comments (118)

Opinions that are stupid are going to be clearly stupid.

So the thing is, racism is bad. Really bad. It caused Hitler. It caused slavery. It caused imperialism. Or at least it was closely connected to all of them.

The Holocaust and the civil rights movement convinced us all that it is really, really bad.

Now the other thing is that because racism is bad, our society collectively decided to taboo the arguments that racists make and use, and to call them horrible.

The next point I want to make is this: as far as I know, the science about race and intelligence is entirely about trying to figure out causation from purely observational studies when you have only medium-sized effects.

We know from human history and animal models that both genetic variation and cultural forces are powerful enough to create the observed differences.

So we try to figure out which one it is using these observational studies on a medium-sized effect (i.e. way smaller than smoking and lung cancer, or stomach sleeping and SIDS). Both causal forces are, in principle, capable of producing the observed outcomes.

You can't do it. Our powers of causal inference are insufficient. It doesn't work.

What you are left with is your prior about evolution, about culture, and about all sorts of other things. But there is no proof in either direction.

So this is the epistemic situation.

But because racism is bad, society, and to a lesser extent the scientific community, has decided to say that attributing any major causal power to biology in this particular case is disproven pseudoscience.

Some people are good at noticing when the authorities around them and their social community and the people on their side are making bad arguments. These people are valuable. They notice important things. They point out when the emperor has no clothes. And they literally built the EA movement.

However, this ability to notice when someone is making a bad argument doesn't turn off just because the argument is being made for a good reason.

This is why people who are good at thinking precisely will notice that society is asserting, with way, way more confidence than the presented evidence justifies, that there is no genetic basis for racial differences in behavior. And because racism is a super important topic in our society, most people who think a lot will think hard about it at some point in their life.

In other words, it is very hard to have a large community of people who are willing to seriously consider that they personally are wrong about something important, and that they can improve, without having a bunch of people who also at some point in their lives at least considered very hard whether particular racist beliefs are actually true.

This is also not an issue with lizard people or flat earthers: in the latter case, the evidence for the socially endorsed view really is that good, and in the former (so far as I have heard -- I have in no way personally looked into the question of lizard people running the world, and I don't think anyone I strongly trust has either, so I should be cautious about being confident in its stupidity), the evidence for the conspiracy theory really is that bad.

This is why you'll find lots of people in your social circles who can be accused of having racist thoughts, and not very many who can be accused of having flat earth thoughts.

Also, if a flat earther wants to hang out at an ea meeting, I think they should be welcomed.

"the common core seems to be ~ we object to extending special-guest and/or speaker status to certain individuals at an EA-adjacent conference. I'm struggling to understand how that assertion strongly implies stuff like "pushing society in a direction that leads to" McCarthyism, embracing "cancel culture" norms more generally, or not "allow[ing adults] to read whichever arguments they are interested in about controversial topics . . . ." For example, I don't recall seeing anyone here say that Hanania et al. should get canceled by whoever is hosting their websites, that they should lose their jobs, etc. (although I don't recall every single comment)."

 

So I certainly pattern match the things being said in this discussion to the things said by people who want to get Substack to remove Hanania, who want people with his opinions who have a normal employer to lose their jobs, and who then, after those people have lost their jobs, want the financial system to refuse to process payments sent by anyone trying to help them survive -- since, after all, it is important to stop people from funneling money to Nazis.

I can't speak for everyone, but I think the crux is that I tend to think the objectors are actually in the first camp, and that they need to be fought on that basis. And so moving forward towards agreement would require creating trust that the objectors actually aren't.

But I think there is also an important difference on the question of what it means to invite someone as a speaker -- ie does it mean that you are endorsing in some sense what they say, or are you just saying that they are someone that enough attendees will find interesting to make it worth giving them a speaking slot.

A culture in which we try to stop people from getting a chance to listen to people who they find interesting, because we dislike things they believe, seems to me to be the essence of the thing I think is bad. Giving someone an opportunity to speak is not endorsement in my head, and it is a very bad norm to treat it like it is.

This also, incidentally, is where the people running Manifest were coming from: They fundamentally don't see inviting Hanania as endorsing his most controversial views, and they certainly don't see it as endorsing the views he held in his twenties that he now loudly claims to reject.

The deplatforming side, meanwhile, seems to think that a culture where people who believe bad things are given platforms to speak, just because the people deciding who will speak find them interesting, is terrible, because it implicitly endorses the bad things those people believe.

To give a different example, if I was running a major EA event, and I could get Emile Torres to speak at it, I definitely would, even though I think he is often arguing in bad faith, and even though I vehemently disagree with both much of his model of the world and the values he seems to espouse. I think enough people would find him interesting enough to be worth listening to, so it makes sense to 'extend him special-guest and/or speaker status'.

My view on this is that, unless there is some really strong argument against HBD-type views that is not regularly being made by the people arguing that HBD-type people are evil, we have in this case a dubious but plausible proposition (HBD) where the strength of the social consensus against it has gotten way, way stronger than the evidence against it.

People who are good at noticing holes in arguments are going to notice that the common arguments saying that HBD-style ideas are obviously and completely false have lots of holes in them. Some of these people will then have a period where they think HBD is probably true before (possibly) they notice the holes that also exist in the arguments for HBD.

In this context it is pretty likely that 'being good at noticing holes in arguments that your social group strongly endorses' is going to associate with a tendency to 'racism'.

 

I also have a dislike for excluding people who have racist-style views simply on that basis, with no further discussion needed, because doing so effectively sets the prior for racism being true to 0 before we've actually looked at the data.

Make the argument on the merits for why they are bad scholars making provably false arguments, like we do with creationists, anti-vaxxers, and 9-11 truthers, or let them talk. Trying to convince me to not listen to Hanania without establishing that what he says is not connected to reality feels to me like you are trying to make me have stupider beliefs because it is politically convenient for you. 

That feeling, like you are treating me as a child who needs to be given false stories so I do the right thing, is probably behind a huge portion of the rationalist community's commitment to not excluding people.

Of course the story in the head of the anti-racist is that they are stopping bad things from happening, that they are acting to prevent things like slavery, the Holocaust, and Jim Crow from occurring, and that by excluding racists they are working to create a world where current systematic injustices get corrected.

It is possible that this consequentialist argument is correct, but it has nothing to do with epistemics, and simply making it means that you are (at this location) valuing consequences over truth.

Which of course (almost) everyone does sometimes. There are groups (both hypothetical and real) whose speech I'd like to suppress. This is a paradox in my thinking that I feel uncomfortable about, but it is there. 

This isn't good. This really isn't good.

I say this because I want to avoid the whole thing, and because I am far less attached to EA as a result of these arguments, while being on the opposite side of the political question from where I assume you are.

Anyways, I'd call this weak evidence for the 'EA should split into rationalist!EA and normie!EA' position.

Intuitively, though, it seems likely that it would be better for the movement if only people from one side were leaving, rather than the controversies alienating both camps from the brand.

Again, 'order of magnitude' is a very clear mathematical concept. I think what you mean is that 'orders of magnitude equivalent from unhobbling' is a made-up thing, and that there is a story being told about increases in compute / algorithmic efficiency which might not match what will happen in the real world, where the use of this concept is part of an exercise in persuasive storytelling.

This comment completely ignores all of the good, strong, and highly compelling points you no doubt made in this post, which I didn't read.

To someone who has spent a lot of time around math and who understands perfectly well what OOM means -- i.e. 'an order of magnitude increase in something', or alternatively 'the number of OOMs is the number of times a value increases by roughly 10x' -- calling it a 'made-up unit' suggests that you are either not terribly technical and mathy, or that you are focused on the sort of nitpicking that obscures your actual counterargument rather than making it clear.

I of course could be wrong on those points. I don't know. I didn't read the essay. But even if I am wrong, it was probably a bad move to argue that a turn of phrase which is both clear and in fairly common usage in technical spaces is one whose use should make me think less of an author.
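To make that definition concrete, here is a minimal sketch of the arithmetic (my own illustration, with hypothetical example numbers -- nothing from the essay under discussion):

```python
import math

def ooms(old: float, new: float) -> float:
    """Orders of magnitude between two positive quantities:
    the base-10 log of their ratio, so 10x = 1 OOM, 100x = 2 OOMs."""
    return math.log10(new / old)

# Hypothetical example: scaling training compute from 1e25 to 1e27 FLOP
print(ooms(1e25, 1e27))  # -> 2.0, i.e. two orders of magnitude
```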

I think 'robustly' does enough work to make item 3 also pretty uncertain for at least a lot of people.

Nitpicky reply, but reflecting an attitude that I think has some value to emphasize:

Based on what you wrote, I think it would be far more accurate to describe GBD as 'robust enough to be a useful tool for specific purposes', rather than 'very robust'.

I think what this means, in part, is that we also need to work to create institutions around AI that are actually trustworthy.

I'm probably further from the problem than you, but it is a kind of silly projection in a different way, because it also has embedded in it a reason why there is no chance that current methods will be scaled up: they are too expensive. The far higher carbon cost implies that a far larger amount of energy and other resources is being used as well.
