
ZachWeems

Comments

Un-endorsed for two reasons. 

  • Manifold invited people based on their having advocated for prediction markets, which is a much stricter criterion than being a generic public speaker who feels positively about your organization. With a smaller pool of speakers, it is not trivially cheap to apply filters, so the case is not as clear-cut as I claimed. (I could have found out this detail before writing, and I feel embarrassed that I didn't.)
  • Despite having an EA in a leadership role and ample EA-adjacent folks who associate with it, Manifold doesn't consider itself EA-aligned. It sucks that potential EAs will sometimes mistake non-EAs for EAs, but it is important to respect it when a group tells the wider EA community that we aren't their real dad and can't make requests. (This does not appear to have been common knowledge, so I feel less embarrassed about this one.)

Imagine a hundred well-intentioned people look into whether there are dragons. They look in different places, make different errors, and there are a lot of things that could be confused for dragons or things dragons could be confused for, so this is a noisy process. Unless the evidence is overwhelming in one direction or another, some will come to believe that there are dragons, while others will believe that there are not.
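As a toy illustration of how such a split can arise (every number here is made up for the sketch, not an estimate of anything real): suppose each investigator makes a few noisy observations and updates a 50/50 prior, in a world where there are in fact no dragons but some fraction of observations misleadingly look dragon-ish.

```python
import random

# Toy model: all parameters are hypothetical.
random.seed(0)

def posterior_after_noisy_search(n_obs=5, error_rate=0.3, prior=0.5):
    """Bayesian update on n_obs noisy observations; ground truth is 'no dragons'."""
    odds = prior / (1 - prior)
    for _ in range(n_obs):
        saw_dragon_sign = random.random() < error_rate  # misleading observation
        if saw_dragon_sign:
            # A dragon-ish sign is assumed more likely in worlds with dragons.
            odds *= (1 - error_rate) / error_rate
        else:
            odds *= error_rate / (1 - error_rate)
    return odds / (1 + odds)

beliefs = [posterior_after_noisy_search() for _ in range(100)]
believers = sum(b > 0.5 for b in beliefs)
print(f"{believers}/100 investigators come away believing in dragons")
```

With these made-up parameters, roughly one in six honest, competent investigators ends up a believer in expectation, despite there being nothing to find.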

While humanity is not perfect at uncovering the truth in confusing situations, the approach that best tracks the truth is for people to report back what they've found and have an open discussion of the evidence. Perhaps some evidence A finds is very convincing to them, but then B shows how they've been misinterpreting it.

This is a bit discourteous here.

I am not claiming that A is convincing to me in isolation. I am claiming that after a hundred similarly smart people fit different evidence together, there's so much model uncertainty that I'm conservatively downgrading A from "overwhelmingly obvious" to "pretty sure". I am claiming that if we could somehow make a prediction market that would resolve on the actual truth of the matter, I might bet only half my savings on A, just in case I missed something drastic.
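To put a rough number on that kind of bet (the Kelly criterion isn't something anyone in this thread invoked; it's just one standard benchmark for bet sizing, and the even-odds assumption is mine):

```python
# Kelly criterion (a standard benchmark, not something from the thread):
# the bankroll fraction f* = p - (1 - p) / b maximizes long-run log-wealth
# for a bet paying b-to-1 with subjective win probability p.
def kelly_fraction(p: float, b: float = 1.0) -> float:
    return p - (1 - p) / b

# At even odds (b = 1), staking half your savings is Kelly-optimal
# exactly when you assign p = 0.75 to being right.
print(kelly_fraction(0.75))  # -> 0.5
```

So on this benchmark, "half my savings" at even odds cashes out to roughly 75% confidence: comfortably "pretty sure" rather than "overwhelmingly obvious".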

You're free to dismiss this as overconfidence of course. But this isn't amateur hour, I understand the implications of what I'm saying and intend my words to be meaningful.

Many sensible people have (what I interpret as) @NickLaing's perspective, and people with that perspective will only participate in the public evidence reconciliation process if they failed to find dragons. I don't know, for example, whether this is your perspective.

You wrote essentially the opposite... I agree some people will think this way, but I think they are far fewer than the people who are willing to publicly argue for generally-accepted-as-good positions but not for generally-accepted-as-evil ones.

I think this largely depends on whether a given forum is anonymous or not. In an alternate universe where the dragon scenario was true, I think I'd end up arguing for it anonymously at some point, though likely not on this forum. 

I was not particularly tracking my named-ness as a point of evidence, except insofar as it could be used to determine my engagement with EA & rationality and make updates about my epistemics & good faith.

Good faith participation in a serious debate on the existence of dragons risks your reputation and jeopardizes your ability to contribute in many places.

Sure. I understand it's epistemically rude to take debate pot-shots when an opposing team would be so disadvantaged, and there's a reason to ignore one-sided information. There's no obligation to update or engage if this comes across as adversarial.

But I really am approaching this as cooperatively communicating information. I found I had nonzero stress about the perceived possibility of dragons here, and I expect others do as well. I think a principled refusal to look does have nonzero reputational harm. There will be situations where that's the best we can manage, but there's also such a thing as a p(dragon) low enough that it's no longer a good strategy. If it is the case that there are obviously no dragons somewhere, it'd be a good idea for a high-trust group to have a way to call "all clear".

So this is my best shot. Hey, anyone reading this? I know this is unilateral and all, but I think we're good.

Agreed.

I think I wasn't entirely clear; the recommendation was that if my claim sounded rational, people should update their probability, not that people should change their asymmetric question policy. I've edited a bit to make it clearer.

For simplicity I'll put aside some disagreements about which spaces are rationalist, and assume we'd agree on what lines we'd like to draw.

I think you're assuming a higher level of control among rationalists than what actually exists. "Must" requires "can", and I don't think we have much of that.

If he wanted to, Scott could take a harsher moderation policy on his Substack. I'd like it if he did. Frankly, some of the commenters are mindkilled morons, and my impression is there were fewer of those in the SSC days. But at the end of the day it's his decision, and there's no organization within the rationality community that could even say "he's wrong to leave it like that" without it being a huge overreach. Similarly for whoever controls the ACX subreddit: I suppose you could try to convince the mods to run it like LW, but they'd be unlikely to change their minds, and if they did, the most likely result would be the mindkilled types going off to make an "ACX 2" subreddit.

Even more so with Twitter and in-person communications. Signaling, or merely pattern-matching to, an association with rationalists does not give other rationalists any say over a misbehaving person.

This isn't directly responsive to your comment, but I've gone to that particular edge of the map and poked around a bit. I think people who avoid looking into the question for the above reason typically sound like they expect that there may plausibly be dragons. This is a PSA that I saw no dragons, so the reader should consider the dragons less plausible.

There certainly are differences in individual intelligence due to genetics. And at the species level, genes are what make humans smarter than, say, turtles. It's also true that there's no law of reality preventing unfortunate things like one group of sapients being noticeably smarter than another due to genetics. However, I'm pretty sure that this is not a world where that happened with continent-scale populations of Homo sapiens[1]. I think it's more likely that the standard evidence presented in favor instead reflects researchers' difficulty in accounting for all non-genetic factors.

I don't mean to argue for spending time reading about this. The argument against checking every question still applies, and I don't expect to shift anyone's expectations of what they'd find by a huge amount. But my impression is that people sound as though their expectations are rather gloomy[2]. I'd like to stake some of my credibility to nudge those expectations towards "probably fine".

  1. ^

    I feel like I ought to give a brief and partial explanation of why: human evolutionary history shows an enormous "hunger" for higher intelligence. Mutations that increase intelligence at only a moderate cost would tend to spread rapidly across populations, even relatively isolated ones, much like lactose tolerance is doing. It would be strange if this pressure had dropped off in some locations after human populations diverged.

    It's possible that there were differing environmental pressures that pushed different tradeoffs between aspects of intelligence. E.g., perhaps at very high altitudes it's more favorable to consider distant dangers with very thorough system-2 assessments, while in lowlands it's better to make system-2 faster but less careful. However, at the scale corresponding to the term "race" (i.e., roughly continent-scale), I struggle to think of large or moderate environmental trends that would affect optimal cognition style, whereas continent-scale trends that affect optimal skin pigmentation are quite clear.

    Adding to this, our understanding of genetics is growing rapidly. If there were major differences in cognition-affecting mutations corresponding to racial groupings, I'd have bet a group of scientists would have stumbled on them by now & caused an uproar I'd hear about. As time goes on, the lack of uproars becomes stronger evidence.

  2. ^

    I suspect this is due to a reporting bias among non-experts who talk about this question. Those who perceive "dragons on the map" will often feel their integrity is at stake unless they speak up. Those who didn't find any will lose interest and won't feel their integrity is at stake, so they won't speak up. So the people who calmly state facts on the matter, instead of shouting about bias, are disproportionately the ones convinced of genetic differences, which heuristically over-weights their position.

Clarifying for forum archeologists: "traditionalist" in Catholicism refers to people who consider the theological claims and organizational changes of Vatican II to be illegitimate, or at minimum to have been taken too far. Catholics who consider the Church to have divinely guided authority over religious and moral truths will sometimes call themselves "orthodox" (lowercase) Catholics, to distinguish themselves from those who don't accept this & from traditionalists who accept everything up to Vatican II.

So, ozymandias intended to indicate "Davis accepts the Vatican's teaching on sin, hell, sexual mores, etc". Davis objected to an adjective that implied he rejects Vatican II.

I'm inclined to write defenses of the views in the latter paragraph:

  • My read (I admit I skimmed) is that Scott doesn't opine because he is uncertain whether a large-scale reproduction-influencing program would be a good idea in a world without genetic engineering (GE) on the horizon, not because he has a hidden opinion about reproduction programs we ought to be running despite the possibility of GE.
  • I don't think the mere presence of a "dysgenic" discussion in a Bostrom paper merits criticism. Part of his self-assigned career path is to address all of the X-risks. This includes exceedingly implausible phenomena such as demon-summoning, because it's probably a good idea for one smart human to have allocated a week to that disaster scenario. I don't think dysgenic X-risks are obviously less plausible than demon-summoning, so I think it's a good idea someone wrote about it a little.
  • The article on this forum originated as a response to Torres' hyperbolic rhetoric, and primarily defends things that society is already doing such as forbidding incest.
  • Singer's argument, if I remember correctly, does not involve eugenics at all. It involves the amount of enjoyment experienced by a profoundly disabled child vs. a non-disabled child, and the effects on the parents, but not the effect on a gene pool. I believe the original actually specified severe disabilities that are by their nature unlikely to be passed on (due to lethality, infertility, incompatibility with intercourse, or incompatibility with consent), so the only impact would be to add a sibling to the gene pool who might be a carrier for the disability.

I don't know that "extremist" is a good characterization of FTX & Alameda's actions.

Usually "extremist" implies a willingness to take highly antisocial actions for the sake of an extreme ideology.

It's fair to say that trying to found a billion-dollar company with the explicit goal of eventually donating all profits is an extreme action. It's highly unusual and takes specific ideas much further than most adherents do. But unless one takes a very harsh stance against capitalism (or against cryptocurrency), it's hard to call this action highly antisocial just yet. The antisocial part arrives with the first fraudulent action taken.

A narrative I keep seeing is that Sam and several others thought not only that the longstanding arguments against robbing banks to donate to charity were flawed, but that they should feel OK robbing customers who trusted them in order to get donation funds.

If someone believed this extreme-ified version of EA and so committed fraud with billions of dollars, that would be extremist. But my impression is that, whether it started as a grievous accounting flaw, a risky conspiracy between amphetamine-fueled manics, or something else, the fraud wasn't the result of people doing careful math, sleeping on it, and ultimately deciding it was net positive. It involved irrational decisions. (This is especially clear by the end. I'd need to refresh my memory to talk specifics, but I think in the last months SBF was making long-term illiquid investments that made it even less plausible they could avoid bankruptcy, and that blatantly did not increase EV even from a risk-neutral perspective.)

If the fraud was irrational regardless of whether their ideology was ok with robbery, then in my view there's little evidence ideology caused the initial decision to commit fraud.

Instead, the relevant people took an extreme action, and then committed the various moral and corporate failures typical of white-collar crime, which were antisocial and went against their ideology.

Regarding the last paragraph, in the edit:

I think the comments here are ignoring a perfectly sufficient reason to not, eg, invite him to speak at an EA adjacent conference. If I understand correctly, he consistently endorsed white supremacy for several years as a pseudonymous blogger.

Effective Altruism has grown fairly popular. We do not have a shortage of people who have heard of us and are willing to speak at conferences. We can afford to apply a few filtering criteria that exclude otherwise acceptable speakers. 

"Zero articles endorsing white supremacy" is one such useful filter. 

I predict that people considering joining or working with us would sometimes hear about speakers who'd once endorsed white supremacy, and be seriously concerned. I'd put non-negligible odds on the number who back off because of this reducing the movement's growth by over 10%. We can and should prefer speakers who don't bring this potential problem.

 

A few clarifications follow:

-Nothing about this relies on his current views. He could be a wonderful fluffy bunny of a person today, and it would all still apply. That doesn't sound like the consensus in this thread, but it's not relevant.

-This does not mean anyone needs to spurn him, if they think he's a good enough person now. Of course he can reform! I wouldn't ask that he sew a scarlet letter into his clothing, or become unemployable, or be cast into the outer darkness. But it doesn't seem unreasonable to say that past actions as a public thinker can affect one's future as a public thinker. I sure hope he wouldn't hold it against people that he gets fewer speaking invitations despite reforming.

-I don't see this as a slippery slope towards becoming a closed-minded community. The views he held would have been well outside the Overton window of any EA space I've been in, to the best of my knowledge. There were multiple such views, voiced seriously and consistently. Bostrom's ill-advised email is not a good reason to remove him from lists of speakers; Hanania's multi-year advocacy of racist ideas is a good reason. There will be cases that require careful analysis, but I think both of these are extreme enough to be fairly clear-cut.

[This comment is no longer endorsed by its author]

The agree-votes have pretty directly proven you correct.
