
I work for CEA, but these are my personal views. 

Relevant background: I previously co-founded two EA groups, at Yale University and the healthcare corporation Epic. In one case, I had to make a decision about how to handle a potential guest speaker who was also a controversial figure; this is part of why I sympathize with EA Munich’s position, though a small part.

Epistemic status: A lot of pent-up venting, which I hope adds up to something moderately reasonable. But I wouldn’t be too surprised if it doesn’t.


Many things can be true at the same time.

A planned EA Munich event with Robin Hanson was recently cancelled. This is EA Munich’s explanation. This is a Twitter thread with lots of reactions.

For context, I’ll start with a factual clarification, based on conversations with others at CEA (all of this is also detailed in the Munich group’s document):

  • When the Munich organizers got in touch with CEA, they were already considering whether to cancel the event.
  • CEA staff told the organizers that they didn’t see a clear-cut “right decision,” and that it could be reasonable to cancel or not cancel the event. Most of CEA’s engagement with the Munich group on this matter involved thinking through ways to handle conflict that could arise from the event, rather than ways to cancel it.
  • The organizers then held a vote among themselves and decided to cancel.

Here are some things about the situation which seem true to me (though this doesn’t necessarily make them true):

On the decision and ensuing social media kerfuffle

  1. It is generally good for groups interested in finding good ideas to choose speakers on the basis of the quality of their best ideas, rather than their most controversial or misguided ideas.
  2. However, if most members of a small group don’t want a speaker to present to their group, this is a good reason for that speaker not to present. The smaller the group, the more true this seems. (If a speaker is disinvited from an event at a large university, thousands of supporters might be left disappointed; this isn’t the case for a tiny event run by a local EA group.)
  3. The Slate piece cited as criticism of Hanson was uncharitable; reading it would probably leave most people with a different view of Hanson than they’d get from reading a wider selection of Hanson’s work.
  4. And yet, many people are actually uncomfortable with Hanson for some of the same reasons brought up in the Slate piece; they find his remarks personally upsetting or unsettling.
    • It’s unclear how many members/organizers of the Munich group were personally upset/unsettled by Hanson and how many were mostly concerned with the PR implications of his presence, but it seems likely that both groups were represented.
  5. Those who commented on the announcement were generally quite uncharitable to EA Munich — including people I’m certain would endorse the Principle of Charity in the abstract if I were to ask them about it independent of this context. Reading Hanson’s tweets likely left them with a very different view of EA Munich than they’d get from attending a few meetups.
  6. I wasn’t involved in CEA’s discussion with EA Munich, but CEA giving them the go-ahead to make their own decision seems correct.
    • I don’t think Hanson’s supporters would actually have wanted CEA to say: “You should run the event even if it feels like the wrong decision.”
    • Maybe they would have wanted CEA to say: “You should do what seems best, but keep in mind the negative consequences of deplatforming speakers.” But EA Munich was clearly aware of the negative consequences. What could CEA tell them that they didn’t know already, aside from “we trust you to make a decision”?
  7. There are ways in which EA Munich could have adjusted their announcement to better communicate their reasoning.
  8. There are many ways in which the EA Munich announcement is much, much better than other announcements of its type produced by institutions with far more power, prestige, and PR experience.
  9. Writing an announcement that has to be approved by eight people (all volunteers), involves a sensitive topic, and has to be published quickly… is something I wouldn’t wish on anyone. Be kind.

On Robin Hanson

  1. Based on my reading of some of Hanson’s work, I believe he cares a lot about the world being a better place and people living better lives, whoever they are. He is the respected colleague of several of my favorite bloggers. I’d probably find him an interesting person to eat lunch with.
  2. Much of Hanson’s writing (as EA Munich pointed out themselves!) is interesting and valuable. And some writing that doesn’t seem interesting or valuable to me is clearly interesting or valuable to other people, which probably means that I’m underestimating the total value of his output.
  3. Some of Hanson’s writing has probably been, on net, detrimental to his own influence. Had he chosen not to publish that writing (or altered it, gotten more feedback before publishing, etc.), his best and most important ideas would have a better chance of improving the world. Instead, much of the attention he gets involves ideas which I doubt he even cares about very much (though I don’t know Hanson, and this is just a guess).
  4. But as I said, many things can be true at the same time. There is something to the argument that an ideal scholarly career will involve some degree of offense, because filtering all of one’s output takes a lot of time and energy and will produce false positives. “If you never make people angry, you’re spending too much time editing your work.”
  5. Still, many other scholars have done a better job than Hanson at presenting controversial ideas in a productive way. (Several of them work in his academic department and have written thousands of blog posts on varied topics, many of them controversial.)
  6. To the extent that I support some of Hanson’s ideas and want to see them become better-known, I am annoyed that this may be less likely to happen because of Hanson’s decisions. (Though maybe the controversies lead more people to his good ideas in a way that is net positive? I really don't know.)
  7. And of course, Hanson's approach to his own work is none of my business, and he can write whatever he wants. I just have a lot of feelings.

On the EA movement’s approach to ideas, diversity, etc.

  1. EA Munich’s decision doesn’t say much, if anything, about EA in general. They are a small group and acted independently.
  2. That said, my impression is that, over time, the EA movement has become more attentive to various kinds of diversity, and more cautious about avoiding public discussion of ideas likely to cause offense. This involves trade-offs with other values.
  3. However, these trade-offs could easily be beneficial, on net, for the movement’s goals.
    • Whether they actually are depends on many factors, including what a given person would define as “the movement’s goals.” Different people want EA to do different things! Competing access needs are real!
  4. Some of the people who have encouraged EA to be more attentive to diversity and more cautious about public discussion did so without thinking carefully about trade-offs.
  5. Some of the people who have encouraged EA not to become more cautious and attentive to diversity… also did so without thinking carefully about trade-offs.
  6. Given prevailing EA discussion norms, I would expect people who favor more attentiveness to diversity to be underrepresented in community discussions, relative to their actual numbers. My experience running anonymous surveys of people in EA (Forum users, org employees, etc.) tends to bear this out.
    • However, underrepresentation isn’t exclusive to this group. I’ve heard from people with many different views who feel uncomfortable talking about their views in one or more places.
  7. The more time someone spends talking to a variety of community members (and potential future members), the more likely they are to have an accurate view of which norms will best encourage the community’s health and flourishing. Getting a sense of where the community lies on issues often involves having a lot of private conversations, because people often say more about their views in private than they will in a public forum.
  8. Some of the people who have spent the most time doing the above came to the conclusion that EA should be more cautious and attentive to diversity.
    • I don’t know what the right trade-offs are myself, but I recognize that, compared to the aforementioned people, I have access to (a) the same knowledge about trade-offs and (b) less knowledge about actual people in the community.
    • Hence, I’m inclined to weigh someone’s views more heavily if they’ve spent a lot of time talking to community members.
    • That said (almost done), I spoke to some of the aforementioned people, who cautioned me not to defer too much to their views, and pointed out that “opinions about diversity” aren’t necessarily correlated with “time spent talking to community members,” presenting me with examples of other frequent conversation-havers who hold very different opinions.
      • This drives home for me how open these kinds of questions are — and how wrongfooted it seems when people present EA or its biggest orgs as some kind of restrictive orthodoxy.

Comments

Do you have any thoughts on this earlier comment of mine? In short, are you worried about EA developing a full-scale cancel culture similar to other places where SJ values currently predominate, like academia or MSM / (formerly) liberal journalism? (By that I mean a culture where many important policy-relevant issues either cannot be discussed, or the discussions must follow the prevailing "party line" in order for the speakers to not face serious negative consequences like career termination.) If you are worried, are you aware of any efforts to prevent this from happening? Or at least discussions around this among EA leaders?

I realize that EA Munich and other EA organizations face difficult trade-offs and believe that they are making the best choices possible given their values and the information they have access to, but people in places like academia must have thought the same when they started what would later turn out to be their first steps towards cancel culture. Do you think EA can avoid sharing the same eventual fate?

[Tangent:] Based on developments since we last engaged on the topic, Wei, I am significantly more worried about this than I was at the time. (I.e., I have updated in your direction.)

ragyo_odan_kagyo_odan
What made you update?

Of the scenarios you outline, (2) seems like a much more likely pattern than (1), but based on my knowledge of various leaders in EA and what they care about, I think it's very unlikely that "full-scale cancel culture" (I'll use "CC" from here) evolves within EA. 

Some elements of my doubt: 

  • Much of the EA population started out being involved in online rationalist culture, and those norms continue to hold strong influence within the community.
  • EA has at least some history of not taking opportunities to adopt popular opinions for the sake of growth:
    • Rather than leaning into political advocacy or media-friendly global development work, the movement has gone deeper into longtermism over the years.
    • CEA actively shrank the size of EA Global because they thought it would improve the quality of the event.
    • 80,000 Hours has mostly passed on opportunities to create career advice that would be more applicable to larger numbers of people.
    • Obviously, none of these are perfect analogies, but I think there's a noteworthy pattern here.
  • The most prominent EA leaders whose opinions I have any personal knowledge of tend to be quite anti-CC.
  • EA has a strong British influence (rather than being wholl…

(I'm occupied with some things so I'll just address this point and maybe come back to others later.)

It seems like the balance of opinion is very firmly anti-CC.

That seems true, but on the other hand, the upvotes show that concern about CC is very widespread in EA, so why did it take someone like me to make the concern public? Thinking about this, I note that:

  1. I have no strong official or unofficial relationships with any EA organizations and have little personal knowledge of "EA politics". If there's a danger or trend of EA going in a CC direction, I should be among the last to know.
  2. Until recently I have had very little interest in politics or even socializing. (I once wrote "And while perhaps not quite GPGPU, I speculate that due to neuroplasticity, some of my neurons that would have gone into running social interactions are now being used for other purposes instead.") Again it seems very surprising that someone like me would be the first to point out a concern about EA developing or joining CC, except:
  3. I'm probably well within the top percentile of all EAs in terms of "cancel proofness", because I have both an independent source of income and a non-zero amount of "intersect…
I believe that the social dynamics leading to development of CC do not depend on the balance of opinions favoring CC, and only require that those who are against it are afraid to speak up honestly and publicly

I agree with this. This seems like an opportune time for me to say in a public, easy-to-google place that I think cancel culture is a real thing, and very harmful.

The conclusion I draw from this is that many EAs are probably worried about CC but are afraid to talk about it publicly because in CC you can get canceled for talking about CC, except of course to claim that it doesn't exist. (Maybe they won't be canceled right away, but it will make them targets when cancel culture gets stronger in the future.) I believe that the social dynamics leading to development of CC do not depend on the balance of opinions favoring CC, and only require that those who are against it are afraid to speak up honestly and publicly (c.f. "preference falsification"). That seems to already be the situation today.

It seems possible to me that many institutions (e.g. EA orgs, academic fields, big employers, all manner of random FB groups...) will become increasingly hostile to speech or (less likely) that they will collapse altogether.

That does seem important. I mostly don't think about this issue because it's not my wheelhouse (and lots of people talk about it already). Overall my attitude towards it is pretty similar to other hypotheses about institutional decline. I think people at EA orgs have way more reasons to think about this iss…

To follow up on this: Paul and I had an offline conversation, but it kind of petered out before reaching a conclusion. I don't recall all that was said, but I think a large part of my argument was that "jumping ship" or being forced off for ideological reasons was not "fine" when it happened historically (for example, to communists in Hollywood and conservatives in academia), but represented disasters (i.e., very large losses of influence and resources) for those causes. I'm not sure if this changed Paul's mind.

I'm not sure what difference in prioritization this would imply or if we have remaining quantitative disagreements. I agree that it is bad for important institutions to become illiberal or collapse and so erosion of liberal norms is worthwhile for some people to think about. I further agree that it is bad for me or my perspective to be pushed out of important institutions (though much less bad to be pushed out of EA than out of Hollywood or academia).

It doesn't currently seem like thinking or working on this issue should be a priority for me (even within EA other people seem to have clear comparative advantage over me). I would feel differently if this was an existential issue or had a high enough impact, and I mostly dropped the conversation when it no longer seemed like that was at issue / it seemed in the quantitative reference class of other kinds of political maneuvering. I generally have a stance of just doing my thing rather than trying to play expensive political games, knowing that this will often involve losing political influence.

It does feel like your estimates for the expected harms are higher than mine, which I'm happy enough to discuss, but I'm no…

I think your earlier comments make sense from the perspective of trying to convince other folks here to think about these issues and I didn’t intend for the grandparent to be pushing against that.

I think this is the crux of the issue, where we have this pattern where I interpret your comments (here, and with various AI safety problems) as downplaying some problem that I think is important, or is likely to have that effect in other people's minds and thereby make them less likely to work on the problem, so I push back on that, but maybe you were just trying to explain why you don't want to work on it personally, and you interpret my pushback as trying to get you to work on the problem personally, which is not my intention.

I think from my perspective the ideal solution would be if in a similar future situation, you could make it clearer from the start that you do think it's an important problem that more people should work on. So instead of "and lots of people talk about it already" which seems to suggest that enough people are working on it already, something like "I think this is a serious problem that I wish more people would work on or think about, even though my own comparative advantage probably lies elsewhere."

Curious how things look from your perspective, or a third party perspective.

Why did it take someone like me to make the concern public?

I don't think it did.

On this thread and others, many people expressed similar concerns, before and after you left your own comments. It's not difficult to find Facebook discussions about similar concerns in a bunch of different EA groups. The first Forum post I remember seeing about this (having been hired by CEA in late 2018, and an infrequent Forum viewer before that) was "The Importance of Truth-Oriented Discussions in EA".

While you have no official EA affiliations, others who share and express similar views do (Oliver Habryka and Ben Pace come to mind; both are paid by CEA for work they do related to the Forum). Of course, they might worry about being cancelled, but I don't know either way. 

I've also seen people freely air similar opinions in internal CEA discussions without (apparently) being worried about what their co-workers would think. If they were people who actually used the Forum in their spare time, I suspect they'd feel comfortable commenting about their views, though I can't be sure.

I also have direct evidence in the form of EAs contacting me privately to say that they're worried about EA developing/joi…
Aaron Gertler 🔸
Also: I'd be glad to post something in the EA Polls group I created on Facebook.  Because answers are linked to Facebook accounts, some people might hide their views, but at least it's a decent barometer of what people are willing to say in public. I predict that if we ask people how concerned they are about cancel culture, a majority of respondents will express at least some concern. But I don't know what wording you'd want around such a question.
Max_Daniel

My guess is that your points explain a significant share of the effect, but I'd guess the following is also significant: expressing worries about how some external dynamic might affect the EA community isn't often done on this Forum, perhaps because it's less naturally "on topic" than discussion of e.g. EA cause areas. I think this applies to worries about so-called cancel culture, but also to e.g.:

  • How does US immigration policy affect the ability of US-based EA orgs to hire talent?
  • How do financial crises or booms affect the total amount of EA-aligned funds? (E.g. I think a significant share of Good Ventures's capital might be in Facebook stock?)

Both of these questions seem quite important and relevant, but I recall less discussion of them than I'd have at first glance expected based on their importance. (I do think there was some post on how COVID affects fundraising prospects for nonprofits, which I couldn't immediately find. But I think it's somewhat telling that there the external event was from a standard EA cause area, and there generally was a lot of COVID content on the Forum.)

On the positive side, a recent attempt to bring cancel culture to EA was very resoundingly rejected, with 111 downvotes and strongly upvoted rebuttals.

That cancellation attempt was clearly a bridge too far. EA Forum is comparatively a bastion of free speech (relative to some EA Facebook groups I've observed and as we've now seen, local EA events), and Scott Alexander clearly does not make a good initial target. I'm worried however that each "victory" by CC has a ratcheting effect on EA culture, whereas failed cancellations don't really matter in the long run, as CC can always find softer targets to attack instead, until the formerly hard targets have been isolated and weakened.

Honestly, I'm not sure what the solution is in the long run. I mean, academia is full of smart people, many of whom surely dislike CC as much as most of us and would push back against it if they could, yet academia is now the top example of cancel culture. What is something that we can do that they couldn't, or didn't think of?

I agree that that was definitely a step too far. But there are legitimate middle grounds that don't have slippery slopes.

For example, allowing introductory EA spaces like the EA Facebook group or local public EA group meetups to disallow certain forms of divisive speech, while continuing to encourage serious open discussion in more advanced EA spaces, like on this EA forum.

I refuse to defend something as ridiculous as the idea of cancel culture writ large. But I sincerely worry about the lack of racial representativeness, equity, and inclusiveness in the EA movement, and there needs to be some sort of way that we can encourage more people to join the movement without them feeling like they are not in a safe space.

I think there is a lot of detail and complexity here and I don't think that this comment is going to do it justice, but I want to signal that I'm open to dialog about these things.

For example, allowing introductory EA spaces like the EA Facebook group or local public EA group meetups to disallow certain forms of divisive speech, while continuing to encourage serious open discussion in more advanced EA spaces, like on this EA forum.

On the face of it, this seems like a bad idea to me. I don't want "introductory" EA spaces to have different norms than advanced EA spaces, because I only want people to join the EA movement to the extent that they have very high epistemic standards. If people wouldn't like the discourse norms in the central EA spaces, I don't want them to feel comfortable in the more peripheral EA spaces. I would prefer that they bounce off.

To say it another way, I think it is a mistake to have "advanced" and "introductory" EA spaces, at all.

I am intending to make a pretty strong claim here.

[One operationalization I generated, but want to think more about before I fully endorse it: "I would turn away bil…

Surely there exists a line at which we agree on principle. Imagine that, for example, our EA spaces were littered with people making cogent arguments that steel manned holocaust denial, and we were approached by a group of Jewish people saying “We want to become effective altruists because we believe in the stated ideals, but we don’t feel safe participating in a space where so many people commonly and openly argue that the holocaust did not happen.”

In this scenario, I hope that we’d both agree that it would be appropriate for us to tell our fellow EAs to cut it out. While it may be a useful thing to discuss (if only to show how absurd it is), we can (I argue) push future discussion of it into a smaller space so that the general EA space doesn’t have to be peppered with such arguments. This is the case even if none of the EAs talking about it actually believe it. Even if they are just steel-manning devil’s advocates, surely it is more effective for us to clean the space up so that our Jewish EA friends feel safe to come here and interact with us, at the cost of moving specific types of discussion to a smaller area.

I agree that one of the th…

Surely there exists a line at which we agree on principle. Imagine that, for example, our EA spaces were littered with people making cogent arguments that steel manned holocaust denial, and we were approached by a group of Jewish people saying “We want to become effective altruists because we believe in the stated ideals, but we don’t feel safe participating in a space where so many people commonly and openly argue that the holocaust did not happen.”
In this scenario, I hope that we’d both agree that it would be appropriate for us to tell our fellow EAs to cut it out.

I agree with your conclusion about this instance, but for very different reasons, and I don't think it supports your wider point of view. It would be bad if EAs spent all their time discussing the holocaust, because the holocaust happened in the past, and so there is nothing we can possibly do to prevent it. As such, the discussion is likely to be a purely academic exercise that does not help improve the world.

It would be very different to discuss a currently occurring genocide. If EAs were considering investing resources in fighting the Uighur genocide, for example, it would be ver…

Elityre
I think this comment says what I was getting at in my own reply, though more strongly.
EricHerboso

If you’re correct that the harms that come from open debate are only minor harms, then I think I’d agree with most of what you’ve said here (excepting your final paragraph). But the position of BIPGMs I’ve spoken to is that allowing some types of debate really does do serious harm, and from watching them talk about and experience it, I believe them. My initial intuition was closer to your point of view (it’s just so hard to imagine how open debate on an issue could cause such harm), but, in watching how they deal with some of these issues, I cannot deny that something like a casual denial of systemic racism caused them significant harm.

On a different point, I think I disagree with your final paragraph’s premise. To me, having different moderation rules is a matter of appropriateness, not a fundamental difference. I think that it would not be difficult to say to new EAs that “moderation in one space has different appropriateness rules than in some other space” without hiding the true nature of EA and/or being dishonest about it. This is relevant because one of the main EA Facebook groups is currently deciding how to implement moderation rules with regard to this stuff right now.

Improving signaling seems like a positive-sum change. Continuing to have open debate despite people self-reporting harm is consistent with both caring a lot about the truth and also with not caring about harm. People often assume the latter, and given the low base rate of communities that actually care about truth they aren't obviously wrong to do so. So signaling the former would be nice.

Note: you talked about systemic racism but a similar phenomenon seems to happen anywhere laymen profess expertise they don't have. E.g. if someone tells you that they think eating animals is morally acceptable, you should probably just ignore them because most people who say that haven't thought about the issue very much. But there are a small number of people who do make that statement and are still worth listening to, and they often intentionally signal it by saying "I think factory farming is terrible but XYZ" instead of just "XYZ".

Elityre

First of all, I took this comment to be sincere and in the spirit of dialog. Thank you and salutations. [Everything that I say in this comment is tentative, and I may change my mind.]

If that were actually happening, I would want to think more about the specific case (and talk directly to the people involved), but I'm inclined to bite the bullet of allowing that sort of conversation. The main reason is that (I would guess, though you can say more about your state of mind) there is an implicit premise underlying the stance that we shouldn't allow that kind of talk: namely, that "the Holocaust happened, and Holocaust denial is false".

Now, my understanding is that there is an overwhelming historical consensus that the Holocaust happened. But the more I learn about the world, the more I discover that claims I would have thought were absurd are basically correct, especially in politicized areas. I am not so confident that the Holocaust happened, and especially that the Holocaust happened the way it is said to have happened, that I am willing to sweep out any discussion to the contrary. If people are making strong arguments for a false conclusion, then they should be countered with arguments, not social censure.

In the situation where EAs are making such arguments not out of honest truth-seeking, but as playing edge-lord / trying to get attention / etc., then I feel a lot less sympathetic. I would be more inclined to just tell them to cut it out in that case. (Basically, I would make the argument that they are doing damage for no gain.)

But mostly, I would say that if any people in an EA group were threatening violence, racially motivated or otherwise, we should have a zero-tolerance policy. That is where I draw the line. (I agree that there is a bit of a grey area in the cases where someone is politely advocating for violent action down the line, e.g. the Marxist who has never personally threatened anyone, but is advocating for a violent revolution.)

… I
EricHerboso

We are agreed that truth is of paramount importance here. If a true conclusion alienates someone, I endorse not letting that alienation sway us. But I think we disagree on two points:

  1. I believe diversity is a serious benefit. Not just in terms of movement building, but in terms of arriving at truth. Homogeneity breeds blind spots in our thinking. If a supposed truth is arrived at, but only one group recognizes it as truth, doesn’t that make us suspect whether we are correct? To me, good truth-seeking almost requires diversity in several different forms. Not just philosophical diversity, but diversity in how we’ve come up in the world, in how we’ve experienced things. Specifically including BIPGMs seems to me very important in ensuring that we arrive at true conclusions.
  2. I believe the methods of how we arrive at true conclusions don’t need to be Alastor Moody-levels of constant vigilance. We don’t have to rigidly enforce norms of full open debate all the time.

I think the latter disagreement we have is pretty strong, given your willingness to bite the bullet on holocaust denial. Sure, we never know anything for sure, but when you get to a certain point, I feel like it’s okay to restrict debate on a topic to specialized places. I want to say something like “we have enough evidence that racism is real that we don’t need to discuss it here; if you want to debate that, go to this other space”, and I want to say it because discussing racism as though it doesn’t exist causes a level of harm that may rise to the equivalent of physical harm in some people. I’m not saying we have to coddle anyone, but if we can reduce that harm for almost no cost, I’m willing to. To me, restricting debate in a limited way on a specific Facebook thread is almost no cost. We already restrict debate in other, similar ways: no name-calling, no doxxing, no brigading. In the EAA FB group, we take as a given that animals are harmed and we should help them. We restrict debate on that t…

"I think a model by which people gradually "warm up" to "more advanced" discourse norms is false."

I don't think that's the main benefit of disallowing certain forms of speech at certain events. I'd imagine it'd be to avoid making EA events attractive and easily accessible for, say, white supremacists. I'd like to make it pretty costly for a white supremacist to be able to share their ideas at an EA event.

We've already seen white nationalists congregate in some EA-adjacent spaces. My impression is that spaces (especially online) that don't moderate away or at least discourage such views tend to attract them - this isn't the pattern of activity you'd see if white nationalists randomly bounced around or people organically arrived at those views. I think this is quite dangerous for epistemic norms: white nationalist/supremacist views are very incorrect, they deter large swaths of potential participants, and people who hold them routinely argue in bad faith by hiding how extreme their actual opinions are while surreptitiously promoting the extreme version. It's also, in my view, a fairly clear and present danger to EA, given that there are other communities with some white nationalist presence that are quite socially close to EA.

I don't know anything about Leverage but I can think of another situation where someone involved in the rationalist community was exposed as having misogynistic and white supremacist anonymous online accounts. (They only had loose ties to the rationalist community, it came up another way, but it concerned me.)

abrahamrowe
I just upvoted this comment as I strongly agree with it, but also, it had -1 karma with 2 votes on it when I did so. I think it would be extremely helpful for folks who disagree with this, or otherwise want to downvote it, to talk about why they disagree or downvoted it.

I didn't downvote it, though probably I should have. But it seems a stretch to say 'one guy who works for a weird organization that is supposedly EA' implies 'congregation'. I think that would have to imply a large number of people. I would be very disappointed if I had a congregation of fewer than ten people.

JoshYou also ignores important hedging in the linked comment:

Bennett denies this connection; he says he was trying to make friends with these white nationalists in order to get information on them and white nationalism. I think it's plausible that this is somewhat true.

So instead of saying

We've already seen white nationalists congregate in some EA-adjacent spaces.

It would be more fair to say

We've already seen one guy with some evidence he is a white nationalist (though he somewhat plausibly denies it) work for a weird organization that has some EA links.

Which is clearly much less worrying. There are lots of weird ideologies and a lot of weird people in California, who believe a lot of very incorrect things. I would be surprised if 'white nationalists' were really high up on the list of threats to EA, especially given how... (read more)

JoshYou
I also agree that it's ridiculous when left-wingers smear everyone on the right as Nazis, white nationalists, whatever. I'm not talking about conservatives, or the "IDW", or people who don't like the BLM movement or think racism is no big deal. I'd be quite happy for more right-of-center folks to join EA. I do mean literal white nationalists (on par with the views in Jonah Bennett's leaked emails; I don't think his defense is credible at all, by the way). I don't think it's accurate to see white nationalists in online communities as just the right tail that develops organically from a wide distribution of political views. White nationalists are more organized than that and have their own social networks (precisely because they're not just really conservative conservatives). Regular conservatives outnumber white nationalists by orders of magnitude in the general public, but I don't think that implies that white nationalists will be virtually non-existent in a space just because the majority are left of center.

Describing members of Leverage as "white nationalists" strikes me as pretty extreme, to the level of dishonesty, and is not even backed up by the comment that was linked. I thought Buck's initial comment was also pretty bad, and he did indeed correct his comment, which is a correction that I appreciate, and I feel like any comment that links to it should obviously also take into account the correction.

I have interfaced a lot with people at Leverage, and while I have many issues with the organization, saying that many white nationalists congregate there, and have congregated in the past, just strikes me as really unlikely. 

Buck's comment also says at the bottom: 

Edited to add (Oct 08 2019): I wrote "which makes me think that it's likely that Leverage at least for a while had a whole lot of really racist employees." I think this was mistaken and I'm confused by why I wrote it. I endorse the claim "I think it's plausible Leverage had like five really racist employees". I feel pretty bad about this mistake and apologize to anyone harmed by it.

I also want us to separate "really racist" from "white nationalist", which are just really not the same terms, and which appear to me to ... (read more)

My description was based on Buck's correction (I don't have any first-hand knowledge). I think a few white nationalists congregated at Leverage, not that most Leverage employees are white nationalists, which I don't believe. I don't mean to imply anything stronger than what Buck claimed about Leverage.

I invoked white nationalists not as a hypothetical representative of ideologies I don't like but quite deliberately, because they literally exist in substantial numbers in EA-adjacent online spaces and they could view EA as fertile ground if the EA community had different moderation and discursive norms. (Edited to avoid potential collateral reputational damage) I think the neo-reactionary community and their adjacency to rationalist networks are a clear example.

Just to be clear, I don't think even most neoreactionaries would classify as white nationalists? Though maybe now we are arguing over the definition of white nationalism, which is definitely a vague term and could be interpreted many ways. I was thinking about it from the perspective of racism, though I can imagine a much broader definition that includes something more like "advocating for nations based on values historically associated with whiteness", which would obviously include neoreaction, but would also presumably be a much more tenable position in discourse. So for now I am going to assume you mean something much more straightforwardly based on racial superiority, which also appears to be the Wikipedia definition.

I've debated with a number of neoreactionaries, and I've never seen them bring up much stuff about racial superiority.  Usually just arguing against democracy and in favor of centralized control and various arguments derived from that, though I also don't have a ton of datapoints. There is definitely a focus on the superiority of western culture in their writing and rhetoric, much of which is flawed and I am deeply opposed to many of the things I've seen at le... (read more)

abrahamrowe
Thanks for elaborating! It seems like accusations of EA associations with white supremacy of various sorts come up enough to be pretty concerning. I also think the claims would be equally concerning if JoshYou had said "white supremacists" or "really racist people" instead of "white nationalists" in the original post, so I feel uncertain that Buck walking back his original comment actually lessens the degree to which we ought to be concerned. I didn't really see the Nazi comparisons (I guess saying white nationalist is sort of one, but I personally associate white nationalism as a phrase much more with individuals in the US than Nazis, though that may be biased by my being American).

I guess broadly a trend I feel like I've seen lately is people occasionally writing about witnessing racism in the EA community, with what seem like really genuine concerns, and then those concerns basically not being discussed (at least on the EA Forum) or being framed as shutting down conversation.
Elityre
I don't follow how what you're saying is a response to what I was saying. I wasn't saying "the point of different discourse norms in different EA spaces is that it will gradually train people into more advanced discourse norms." I was saying that if I was mistaken about that "warming up effect", it would cause me to reconsider my view here. In the comment above, I am only saying that I think it is a mistake to have different discourse norms at the core vs. the periphery of the movement.

For example, allowing introductory EA spaces like the EA Facebook group or local public EA group meetups to disallow certain forms of divisive speech, while continuing to encourage serious open discussion in more advanced EA spaces, like on this EA forum.

You know, this makes me think I know just how academia was taken over by cancel culture. They must have allowed "introductory spaces" like undergrad classes to become "safe spaces", thinking they could continue serious open discussion in seminar rooms and journals, then those undergrads became graduate students and professors and demanded "safe spaces" everywhere they went. And how is anyone supposed to argue against "safety", especially once its importance has been institutionalized (i.e., departments were built in part to enforce "safe spaces", which can then easily extend their power beyond "introductory spaces").

ETA: Jonathan Haidt has a book and an Atlantic article titled The Coddling of the American Mind detailing problems caused by the introduction of "safe spaces" in universities.

EricHerboso
I don't think this is pivotal to anyone, but just because I'm curious: If we knew for a fact that a slippery slope wouldn't occur, and the "safe space" was limited just to the EA Facebook group, and there was no risk of this EA forum ever becoming a "safe space", would you then be okay with this demarcation of disallowing some types of discussion on the EA Facebook group, but allowing that discussion on the EA forum? Or do you strongly feel that EA should not ever disallow these types of discussion, even on the EA Facebook group? (by "disallowing discussion", I mean Hansonian level stuff, not obviously improper things like direct threats or doxxing)
Kaj_Sotala
I'm a little surprised by this wording? Certainly cancel culture is starting to affect academia as well, but I don't think that e.g. most researchers think about the risk of getting cancelled when figuring out the wording for their papers, unless they are working on some exceptionally controversial topic. I have lots of friends in academia and follow academic blogs etc., and basically don't hear any of them talking about cancel culture within that context. I did recently see a philosopher post a controversial paper and get backlash for it on Twitter, but then he seemed to basically shrug it off, since people complaining on Twitter didn't really affect him. This fits my general model that most of the cancel-culture influence on academia comes from people outside academia trying to affect it, with varying success. I don't doubt that there are individual pockets within academia that are more cancel-y, but the rest of academia seems to me mostly unaffected by them.

I’m a little surprised by this wording? Certainly cancel culture is starting to affect academia as well, but I don’t think that e.g. most researchers think about the risk of getting cancelled when figuring out the wording for their papers, unless they are working on some exceptionally controversial topic?

Professors are already overwhelmingly leftists or left-leaning (almost all conservatives have been driven away or self-selected away), and now even left-leaning professors are being canceled or fearful of being canceled. See:

and this comment in the comments section of a NYT story about cancel culture among the students:

Having just graduated from the University of Minnesota last year, a very liberal college, I believe these examples don’t adequately show how far cancel culture has gone and what it truly is. The examples used of disassociating from obvious homophobes, or more classic bullying that teenage girls have always done to each other since t

... (read more)
Kaj_Sotala
Thanks. It looks to me that much of what's being described at these links is about the atmosphere among the students at American universities, which then also starts affecting the professors there. That would explain my confusion, since a large fraction of my academic friends are European, so largely unaffected by these developments. I do hear them complain about various other things though, and I also have friends privately complaining about cancel culture in non-academic contexts, so I'd generally expect this to come up if it were an issue. But I could still ask, of course.
ragyo_odan_kagyo_odan

EDIT: I realized that discussing this will not help me do more good or live a happier life so I'd rather not, but I'll leave it up for the record. You are welcome to reply to it.

Something I don't see discussed here is that there's a difference between a) not inviting a live speaker who has a history of being unpredictable and insensitive compared to b) refusing to engage with any of their ideas.

At this point, for my own mental health, I would not engage with Robin Hanson. If I knew he were going to be at an event and I'd have to interact with him, I wouldn't go. But I still might read one of his books - they've been through an editing process so I trust them to be more sensitive and more useful.

I see a lot of people saying "no one involved with EA would really object to Robin Hanson at an event" but there are actually a lot of us. And you can insult me however you want to - you can say that this makes me small-minded or irrational - but that won't make it an "effective" use of my time to hang around someone who's consistently unkind.

[This comment is no longer endorsed by its author]

I appreciate you writing this and leaving it up. I feel basically the same (including the edit, so I'm pretty unlikely to reply to further comments), but I felt better having seen your post, and I think that writing it was, in fact, doing good in this case (at least in making me, and probably others, not feel further separated from the community).

I think there's another difference between:

a) Thinking that a speaker shouldn't be allowed to speak at an event

b) Deciding not to attend an event with a confirmed speaker because you don't like their ideas

For the first half of your comment I thought you fell into camp b) but not camp a). However your last paragraph seems to imply you fall into both camps.

Personally I would not want a person to speak at an EA event if I thought they were likely to cause reputational damage to EA. In this particular case I (tentatively) don't think Hanson would have. Sure he's said some questionable things, but he was being invited to talk about tort law and I fail to see how allowing that signals condoning his questionable ideas. Therefore I would probably have let him speak and anyone who didn't want to hear him would obviously have been free to not attend.

It seems to me that people often imply that personally finding a speaker beyond the pale means that the speaker shouldn't be allowed to speak to anyone. I've always found this slightly odd.

Personally, I feel the same. I can engage with Robin's ideas online. I think he produces some interesting content. Also, some dumb content. I can choose to learn from either. I can notice if he 'offends' me and then decide I'm still interested in whether what he has to say might be useful somehow. ...That doesn't mean I have to invite the guy over to my house to talk with me about his ideas, because I realize that I wouldn't enjoy being around him in person. I think this is more common than people realize among people who know Robin. If Munich wanted to read and discuss his stuff, but not invite him to 'hang out,' I get it.

MaxRa
Thank you for writing it and keeping this up. I think it's really valuable that people share the discomfort they feel around the way some people discuss. I wonder if Kelsey Piper's discussion of competing access needs and safe spaces captures the issue at hand.

* For some people it is really valuable to have a space where one can discuss sensitive topics without caring about offense, where taking offense is discouraged because it would hinder progressing the arguments. Maybe even a space where one is encouraged to let one's mind go to places that are uncomfortable, to develop one's thinking around topics where social norms discourage you from going.
* For others, a space like this would be distressing, depressing and demotivating. A space like this might offer a few insights, but they seem not worth the emotional costs, and there seem to be many other topics to explore from an EA perspective, so why spend any time there.

I also hope that it is very easy for people to avoid spaces like this at EA conferences, e.g. to avoid a talk by Robin Hanson (though from the few talks of his that I saw, his talks are much less "edgy" than the discussed blog posts). I wonder if it would be useful to tag sessions at an EA conference that would belong in the described space, or if people mostly already correctly avoid sessions they would find discomforting.
MaxRa
One idea in the direction of making discussion norms explicit that just came to my mind is Crocker's rules. I've heard that some people are unhappy with those rules, maybe because they seem to signal what Khorton alluded to: "Oh, of course I can accommodate your small-minded irrational sensitivities if you don't want a message optimized for information". I know that they are/were used at the LessWrong Community Weekends in Berlin, where you would wear a "Crocker's rules" sticker on your nametag.

Thanks for writing about this. This incident bothered me, and I really appreciate your thoughts and find them clarifying. I should also flag that I tend to feel really frustrated with people taking offense at arguments (and notice this frustration right now).

To the extent that I support some of Hanson’s ideas and want to see them become better-known, I am annoyed that this is less likely to happen because of Hanson’s missteps.

I find it improper that you call these "missteps", as if he made mistakes. As you said, openly discussing sensitive topics will cause offense if you don't censor yourself a lot. You mention that his colleagues do a better job at making controversial ideas more palatable, but again, as you suggested, maybe they actually spent more time editing their work. This seems like a tradeoff to me, and I'm not convinced that Hanson is making missteps and that we should encourage him to change how he runs his blog to have a more positive impact. Not saying this is true for Hanson, but for some thinkers it might be draining to worry about people taking offense at their thoughts. I'm worried about putting pressure on an important thinker to direct mental resources to things other than having smart thoughts about important topics.

This seems like a tradeoff to me

Yes, it's a tradeoff, but Hanson is so close to one extreme of the spectrum that it starts to be implausible that anyone could be that bad at communicating carefully just by accident. I don't think he's even trying, and maybe he's deliberately trying to walk as close to the line as possible. What's the point in that? If I'm right, I wouldn't want to gratify that. I think it lacks nuance to object blanketly to the "misstep" framing, especially since that's still a relatively weak negative judgment. We probably want to be able to commend some people on their careful communication of sensitive topics, so we also have to be willing to call it out if someone is doing an absolutely atrocious job at it.

For reference, I have listened to a bunch of politically controversial podcasts by Sam Harris, and even though I think there's a bit of room to communicate even better, there were no remarks I'd label as 'missteps.' By contrast, several of Hanson's tweets are borderline at best, and at least one now-deleted tweet I saw was utterly insane. I don't think it'... (read more)

I don’t think he’s even trying, and maybe he’s trying to deliberately walk as close to the line as possible. What’s the point in that?

I can think of at least three reasons for someone to be "edgy" like that:

  1. To signal intelligence, because it takes knowledge and skill to be able to walk as close to a line as possible without crossing it. This could be the (perhaps subconscious) intent even if the effort ends up failing or backfiring.
  2. To try to hold one end of the Overton window in place, if one was worried about the Overton window shifting or narrowing.
  3. To try to desensitize people (i.e., reduce their emotional reactions) about certain topics, ideas, or opinions.

One could think of "edgy" people as performing a valuable social service (2 and 3 above) while taking a large personal risk (if they accidentally cross the line), and receiving the personal benefits of intelligence signaling as compensation. On this view, it's regrettable that more people aren't willing to be "edgy" (perhaps because we as a culture have devalued intelligence signaling relative to virtue signaling), and as a result our society is suffering the negative consequences of an increasingly narrow Overton wi... (read more)

Thanks, those are good points. I agree that this is not black and white, that there are some positives to being edgy.

That said, I don't think you make a good case for the alternative view. I wouldn't say that the problem with Hanson's tweets is that they cause "emotional damage." The problem is that they contribute to the toxoplasma of rage dynamics (esp. combined with some people's impulse to defend everything about them). My intuition is that this negative effect outweighs the positive effects you describe.

The "alternative view" ("emotional damage") I mentioned was in part trying to summarize the view apparently taken by EA Munich and being defended in the OP: "And yet, many people are actually uncomfortable with Hanson for some of the same reasons brought up in the Slate piece; they find his remarks personally upsetting or unsettling."

The problem is that they contribute to the toxoplasma of rage dynamics (esp. combined with some people's impulse to defend everything about them). My intuition is that this negative effect outweighs the positive effects you describe.

This would be a third view, which I hadn't seen anyone mention in connection with Robin Hanson until now. I guess it seems plausible although I personally haven't observed the "negative effect" you describe so I don't know how big the effect is.

Two other reasons to be "edgy" came to my mind:

Signalling frank discussion norms - when the host of a discussion now and then uses words and phrases that would be considered insensitive among a general audience, people in this discussion can feel permitted to talk frankly without having to worry about how the framing of their argument might offend anybody.

Relatedly, I noticed feeling relieved when a person higher in status made a "politically incorrect" joke. I felt like I could relax some part of my brain that worries about saying something that in some context could cause offense and me being punished socially (e.g. being labeled "problematic", which seems to be happening much quicker than I'd like, also in EA circles).

Only half joking, if somebody would leak the chats I have had with my best friend over the years, there is probably something in there to deeply offend every person on Earth. So maybe another reason to be "edgy" is just that it's fun for some people to say things in a norm-violating way? I remember laughing out loudly at two of Hanson's breaches of certain norms. Some part of me is worried about how this make... (read more)

Thanks for the pushback, I'm still confused and it helped me think a bit better (I think). What do you think about the idea that the issue revolves around what Kelsey Piper called competing access needs? I explained how I think about it in this comment. I feel like I want to protect edgy think-aloud spaces like those from Hanson. I feel like I benefit a lot from them, and that I (not being an EA insider) am already excluded from many valuable but potentially offending EA think-aloud spaces because people are not willing to bear the costs like Hanson does.

That all makes sense. I'm a bit puzzled why it has to be edgy on top of just talking with fewer filters. It feels to me like the intention isn't just to discuss ideas with people of a certain access need, but also some element of deliberate provocation. (But maybe you could say that's just a side product of curiosity about where the lines are – I just feel like some of the tweet wordings were deliberately optimized to be jarring.) If it wasn't for that one tweet that Hanson now apologized for, I'd have less strong opinions on whether to use the term "misstep." (And the original post used it in plural, so you have a point.)

I'm a bit puzzled why it has to be edgy on top of just talking with fewer filters.

Presumably every filter is associated with an edge, right? Like, the 'trolley problem' is a classic of philosophy, and yet it is potentially traumatic for the victims of vehicular violence or accidents. If that's a group you don't want to upset or offend, you install a filter to catch yourself before you do, and when seeing other people say things you would've filtered out, you perceive them as 'edgy'. "Don't they know they shouldn't say that? Are they deliberately saying that because it's edgy?"

[A more real example is that a friend once collected a list of classic examples and thought experiments, and edited all of the food-based ones to be vegan, instead of the original food item. Presumably the people who originally generated those thought experiments didn't perceive them as being 'edgy' or 'over the line' in some way.]

but also some element of deliberate provocation.

I read a lot of old books; for example, it's interesting to contrast the 1934 and 1981 editions of How to Win Friends and Influence Peopl... (read more)

Now, I'm not saying Hanson isn't deliberately edgy; he very well might be.

If you're not saying that, then why did you make a comment? It feels like you're stating a fully general counterargument to the view that some statements are clearly worth improving, and that it matters how we say things. That seems like an unattractive view to me, and I'm saying that as someone who is really unhappy with social justice discourse.

Edit: It makes sense to give a reminder that we may sometimes jump to conclusions too quickly, and maybe you didn't want to voice unambiguous support for the view that the comment wordings were in fact not easy to improve on given the choice of topic. That would make sense – but then I have a different opinion.

you didn't want to voice unambiguous support for the view that the comment wordings were in fact not easy to improve on given the choice of topic.

I'm afraid this sentence has too many negations for me to clearly point one way or the other, but let me try to restate it and say why I made a comment:

The mechanistic approach to avoiding offense is to keep track of the ways things you say could be interpreted negatively, and search for ways to get your point across while not allowing for any of the negative interpretations. This is a tax on saying anything, and it especially taxes statements on touchy subjects, and the tax on saying things backpropagates into a tax on thinking them.

When we consider people who fail at the task of avoiding giving offense, it seems like there are three categories to consider:

1. The Blunt, who are ignoring the question of how the comment will land, and are just trying to state their point clearly (according to them).

2. The Blithe, who would put effort into rewording their point if they knew how to avoid giving offense, but whose models of the audience are inadequate to the task.

3. The Edgy, who are optimizing for being 'on the line' or... (read more)

Lukas_Gloor
Thanks, that makes sense to me now! The three categories are also what I pointed out in my original comment.

Okay, so you cared mostly about this point about mind reading. This is a good point, but I didn't find your initial comment so helpful because this point against mind reading didn't touch on any of the specifics of the situation. It didn't address the object-level arguments I gave. I felt confused about why I was presented with a fully general argument for something I thought I indicated I already considered.

If I read your comment as "I don't want to comment on the specific tweets, but your interpretation might be a bit hasty" – that makes perfect sense. But by itself, it felt to me like I was being strawmanned for not being aware of obvious possibilities. Similar to khorton, I had the impulse to say "What does this have to do with trolleys, shouldn't we, if anything, talk about the specific wording of the tweets?" Because to me, phrases like "gentle, silent rape" seem obviously unnecessarily jarring even as far as Twitter discussions about rape go. (And while one could try to defend this as just blunt or blithe, I think the reasoning would have to be disanalogous to your trolley or food examples, because it's not like it should be surprising to any Western person in the last two decades that rape is a particularly sensitive topic – very unlike the "changing animal food to vegan food" example you gave.)

Because to me, phrases like "gentle, silent rape" seem obviously unnecessarily jarring even as far as Twitter discussions about rape go.

I am always really confused when someone brings up this point as a point of critique. The substance of Hanson's post where he used that phrase just seemed totally solid to me. 

I feel like this phrase is always invoked to make the point that Hanson doesn't understand how bad rape is, or that he somehow thinks lots of rape is "gentle" or "silent", but that has absolutely nothing to do with the post where the phrase is used. The phrase isn't even referring to rape itself! 

When people say things like this, my feeling is that they must have not actually read the original post, where the idea of "gentle, silent rape" was used as a way to generate intuitions not about how bad rape is, but about how bad something else is (cuckoldry), and about how our legal system judges different actions in a somewhat inconsistent way. Again, nowhere in that series of posts did Hanson say that rape was in any way not bad, or not traumatic, or not something that we should obviously try to prevent with a substantial fraction of our resources. And given the relati... (read more)

I did read the post, and I mostly agree with you about the content (Edit: at least in the sense that I think large parts of the argument are valid; I think there are some important disanalogies that Hanson didn't mention, like "right to bodily integrity" being way clearer than "moral responsibility toward your marriage partner"). I find it weird that just because I think a point is poorly presented, people think I disagree with the point. (Edit: It's particularly the juxtaposition of "gently raped" that also appears in the main part of the text. I would also prefer more remarks that put the reader at ease, e.g., repeating several times that it's all just a thought experiment, and so on.)

There's a spectrum of how much people care about a norm of presenting especially sensitive topics in a considerate way. You and a lot of other people here seem to be so far toward one end of the spectrum that you don't notice the difference between me and Ezra Klein (in the discussion between Sam Harris and Ezra Klein, I completely agreed with Sam Harris). Maybe that's just because there are few people in the middle of this spectrum, and yo... (read more)

I find it weird that just because I think a point is poorly presented, people think I disagree with the point.

Sorry! I never meant to imply that you disagree with the point. 

My comment in this case is more: How would you have actually wanted Robin Hanson to phrase his point? I've thought about that issue a good amount, and I feel like it's just a really hard point to make. I am honestly curious what other thing you would have preferred Hanson to say instead. The thing he said seemed overall pretty clear to me, and really not like an attempt to be intentionally edgy; it's more that the point he wanted to make just had a bunch of inconvenient consequences that were difficult to explore (similarly to how utilitarianism quickly gives rise to a number of consequences that are hard to discuss and explore).

My guess is you can probably come up with something better, but that it would take you substantial time (> 10 minutes) of thinking. 

My argument here is mostly: In context, the thing that Robin said seemed fine, and I don't expect that many people who read that blogpost actually found his phrasing that problematic. The thing that I expect to hav... (read more)

vaniver
In my original comment, I was trying to resolve the puzzle of why something would have to appear edgy instead of just having fewer filters, by pointing out the ways in which having unshared filters would lead to the appearance of edginess. [On reflection, I should've been clearer about the 'unshared' aspect of it.]
Kirsten
Comparing trolley accidents to rape is pretty ridiculous for a few reasons:
1. Rape is much more common than being run over by trolleys.
2. Rape is a very personal form of violence. I'm not sure anyone has ever been run over by a trolley on purpose in all of history.
3. If you're talking to a person about trolley accidents, they're very unlikely to actually run you over, no matter how cheerful they seem, because most people don't have access to trolleys. If you're talking to a man about rape and he thinks it's not a big deal, there's some chance he'll actually rape you. In some cases, the conversation includes an implicit threat.
If you're talking to a man about rape and he thinks it's not a big deal, there's some chance he'll actually rape you.

I realise you did not say this applied to Robin, but just in case anyone reading was confused and mistakenly thought it was implicit, we should make clear that Robin does not think rape is 'not a big deal'. Firstly, opposition to rape is almost universal in the west, especially among the highly educated; as such our prior should be extremely strong that he does think rape is bad. In addition to this, and despite his opposition to unnecessary disclaimers, Robin has made clear his opposition to rape on many occasions. Here are some quotations that I found easily on the first page of google and by following the links in the article EA Munich linked:

I was not at all minimizing the harm of rape when I used rape as a reference to ask if other harms might be even bigger. Just as people who accuse others of being like Hitler do not usually intend to praise Hitler, people who compare other harms to rape usually intend to emphasize how big are those other harms, not how small is rape.

https://www.overcomingbias.com/2014/11/hanson-loves-moose-caca... (read more)

Yes, I'm not saying that Robin Hanson is a criminal, and it's good to point out that he's not pro-rape. Thanks for that.

I was thinking about what it would look like for the whole EA community to generally try to avoid upsetting people who have been traumatized by rape, and comparing that to the EA community trying to avoid upsetting people who have been traumatized by trolley accidents, which was a suggestion above.

My intuition about the base rate of people who have experienced sexual assault, and about how often sexual assault happens at EA events, is probably different from yours, which may explain our different approaches to this topic.

ragyo_odan_kagyo_odan
How often does sexual assault and/or rape happen at EA events, in your opinion? Are we talking 1 in 10 events, 1 in 100, 1 in 1000?
Comparing trolley accidents to rape is pretty ridiculous for a few reasons:

I think you're missing my point; I'm not describing the scale, but the type. For example, suppose we were discussing racial prejudice, and I made an analogy to prejudice against the left-handed; it would be highly innumerate of me to claim that prejudice against the left-handed is as damaging as racial prejudice, but it might be accurate of me to say both are examples of prejudice against inborn characteristics, are perceived as unfair by the victims, and so on.

And so if you're not trying to compare expected trauma, and just want to come up with rules of politeness that guard against any expected trauma above a threshold, setting the threshold low enough that both "prejudice against left-handers" and "prejudice against other races" are ruled out doesn't imply that the damage done by both is similar.


That said, I don't think I agree with the points on your list, because I used the reference class of "vehicular violence or accidents," which is very broad. I agree there's an important disanalogy in that 'forced choices' like in the trolley problem are high... (read more)

If you think my arguments are incorrect, it would be useful to explain how rather than silently downvoting.

I am starting to wonder if I will be downvoted on the EA Forum any time I point out that rape is bad. That can't be why people downvote these comments, right?

I'm glad you came back to look at this discussion again because I found your comments here (and generally) really valuable. I refrained from upvoting your comment because you called the comparison "pretty ridiculous". I would feel attacked if you called my reasoning ridiculous and would be less able to constructively argue with you.

I think you are right when pointing out that some topics are much more sensitive to many more people, and that EAs being more careful around those topics makes our community more welcoming to more people. That said, I understood vaniver's point was to take an example where most people reading it would not feel like it is a sensitive topic, and *even there* you might upset some people (e.g. if they stumble on a discussion comparing the death of five vs. one). So the solution should not be to punish/deplatform somebody who discussed a topic in a way that was upsetting for someone, and going forward stop people from thinking publicly when touching potentially upsetting topics, but something else.

That's a very helpful overview, thank you.

I'm fairly sure the real story is much better than that, although still bad in objective terms: in culture war threads, the typical norms re karma roughly morph into 'barely restricted tribal warfare'. So people have much lower thresholds both to slavishly upvote their 'team', and to downvote the opposing one.

I downvoted the above comment by Khorton (not the one asking for explanations, but the one complaining about the comparison of trolleys and rape), and I think Larks explained part of the reason pretty well. I read it in substantial part as an implicit accusation that Robin supports rape, and it also seemed to misunderstand Vaniver's comment (which wasn't at all emphasizing a dimension of trolley problems on which a comparison with rape would be unfitting), and to do so in a pretty accusatory way (which meerpirat clarified below).

I agree that voting quality somewhat deteriorates in more heated debates, but I think this characterization of how voting happens is too uncharitable. I try pretty hard to vote carefully, and I often change my votes multiple times on a thread if I later realize I was too quick to judge something or misunderstood someone. I really do spend a lot of time reconsidering my voting behavior with the health of the broader discourse in mind, so I am quite confident that my own voting behavior is mischaracterized by the above.

I've also talked to many other people active on LessWrong and the EA Forum over the years, and a lot of people seem to put a lot of effort into how they vote, so I am also reasonably confident many others also spend substantial time thinking about their voting in a way that really isn't well-characterized by "roughly morphing barely restricted tribal warfare". 

Linch
I am reasonably confident that this is the best first-order explanation. EDIT: Habryka's comment makes me less sure that this is true.

Thanks for the feedback. I think the word "missteps" is too presumptive for the reasons you outlined, and I've changed it to "decisions." I also added a caveat noting that the controversies he's provoked may lead to his ideas becoming better-known generally (though it's really hard to determine the overall effect).

That said, my impression is that, over time, the EA movement has become more attentive to various kinds of diversity, and more cautious about avoiding public discussion of ideas likely to cause offense. This involves trade-offs with other values.

I am skeptical of this. The EA survey shows that one of the most under-represented groups in EA is conservatives, and I have seen little sign that EAs in general, and CEA in particular, have become more cautious about public discussion that will offend conservatives.

Similarly, I don't think there is much evidence of people suppressing ideas offensive to older people, or religious people, even though these are also dramatically under-represented groups.

I think a more accurate summary would be that as EA has grown, it has become subject to Conquest's Second Law, and this has made it less tolerant of various views and people currently judged to be unacceptable by SJWs. Specifically, I would be surprised if there was much evidence of EAs/CEA being more cautious about publicly discussing 'woke' views out of fear of offending liberals or conservatives.

Specifically, I would be surprised if there was much evidence of EAs/CEA being more cautious about publicly discussing 'woke' views out of fear of offending liberals or conservatives.

I hear frequently from people who express fear of discussing "woke" views on the Forum or in other EA discussion spaces. They (reasonably) point out that anti-woke views are much more popular, and that woke-adjacent comments are frequently heavily downvoted. All I have is a series of anecdotal statements from different people, but maybe that qualifies as "evidence"?

My model of this is that there is a large fraction of beliefs in the normal Overton window of both liberals and conservatives, that are not within the Overton window of this community. From a charitable perspective, that makes sense, lots of beliefs that are accepted as Gospel in the conservative community seem obviously wrong to me, and I am obviously going to argue against them. The same is true for many beliefs in the liberal community. Since many more members of the community are liberal, we are going to see many more "woke" views argued against, for two separate reasons: 

  1. Many people assume that all spaces they inhabit are liberal spaces, the EA community is broadly liberal, and so they feel very surprised if they say something that everywhere else is accepted as obvious, suddenly get questioned here (concrete examples that I've seen in the past that I am happy to see questioned are: "there do not exist substantial cognitive differences between genders", "socialized healthcare is universally good", "we should drastically increase taxes on billionaires", "racism is obviously one of the most important problems to be working on").
  2. There are simply many more liberal people so y
... (read more)

Just logging in to say that, as someone who co-ran a large university EA group for three years (incidentally the one that Aaron founded many years prior!), I find it plausible that, in some scenarios, the decision that EA Munich made would be the all-things-considered best one.

Some of the people who have spent the most time doing the above came to the conclusion that EA should be more cautious and attentive to diversity.

Edited from earlier comment: I think I am mostly confused about what diversity has to do with this decision. It seems to me that there are many pro-diversity reasons to not deplatform Hanson. Indeed, the primary one cited, intellectual diversity and tolerance of weird ideas, is primarily an argument in favor of diversity. So while diversity plays some role, I think I am actually confused why you bring it up here.

I am saying this because I wanted to argue against things in the last section, but realized that you just use really high-level language like "diversity and inclusion", which is very hard to say anything about. Of course everyone is in favor of some types of diversity, but it feels to me like the last section is trying to say something like "people who talked to a lot of people in the community tend to be more concerned about the kind of diversity that having Robin as a speaker might harm", though I don't actually know whether that's what you mean. And if you do mean it, I think that's mostly backwards, based on the evidence I have seen.

I maybe should have said something like "concerns related to social justice" when I said "diversity." I wound up picking the shorter word, but at the price of ambiguity.

You'd expect having a wider range of speakers to increase intellectual diversity — but only as long as hosting Speaker A doesn't lead Speakers B and C to avoid talking to you, or people from backgrounds D and E to avoid joining your community. The people I referred to in the last section feel that some people might feel alienated and unwelcome by the presence of Robin as a speaker; they raised concerns about both his writing and his personal behavior, though the latter points were vague enough that I wound up not including them in the post.

A simple example of the kind of thing I'm thinking of (which I'm aware is too simplistic to represent reality in full, but does draw from the experiences of people I've met): 

A German survivor of sexual abuse is interested in EA Munich's events. They see a talk with Robin Hanson and Google him to see whether they want to attend. They stumble across his work on "gentle silent rape" and find it viscerally unpleasant. They've seen other discussion spaces where ideas like Hanson'... (read more)

I maybe should have said something like “concerns related to social justice” when I said “diversity.” I wound up picking the shorter word, but at the price of ambiguity.

I find it interesting that you thought "diversity" is a good shorthand for "social justice", whereas other EAs naturally interpreted it as "intellectual diversity" or at least thought there's significant ambiguity in that direction. Seems to say a lot about the current moment in EA...

Getting the right balance seems difficult.

Well, maybe not, if some of the apparent options aren't real options. For example if there is a slippery slope towards full-scale cancel culture, then your only real choices are to slide to the bottom or avoid taking the first step onto the slope. (Or to quickly run back to level ground while you still have some chance, as I'm starting to suspect that EA has taken quite a few steps down the slope already.)

It may be that in the end EA can't fight (i.e., can't win against) SJ-like dynamics, and therefore EA joining cancel culture is more "effective" than it getting canceled as a whole. If EA leaders have made an informed and well-considered decision about this, then fine, tell me and I'll d... (read more)

Aaron Gertler 🔸
I don't think it says much about the current moment in EA. It says a few things about me:

* That I generated the initial draft for this post in the middle of the night with no intention of publishing
* That I decided to post it in a knowingly imperfect state rather than fiddling around with the language at the risk of never publishing, or publishing well after anyone stopped caring (hence the epistemic status)
* That I spend too much time on Twitter, which has more discussion of demographic diversity than other kinds. Much of the English-speaking world also seems to be this way.

Is there such a slope? It seems to me as though cultures and institutions can swing back and forth on this point; Donald Trump's electoral success is a notable example. Throughout American history, different views have been cancel-worthy; is the Overton Window really narrower now than it was in the 1950s? (I'd be happy to read any arguments for this being a uniquely bad time; I don't think it's impossible that a slippery slope does exist, or that this is as bad as cancel culture has been in the modern era.)

If you have any concerns about specific moderation decisions or other elements of the way the Forum is managed, please let me know! I'd like to think that we've hosted a variety of threads on related topics while managing to maintain a better combination of civility and free speech than almost any other online space, but I'd be surprised if there weren't ways for us to improve.

As for not mentioning the possibility: had I written for a few more hours, there might have been 50 or 60 bullet points in this piece, and I might have bounced between perspectives a dozen more times, with the phrase "slippery slope" appearing somewhere. As I said above, I chose a relatively arbitrary time to stop, share what I had with others, and then publish. I'm open to the possibility that a slippery slope is almost universal when institutions and communities tackle these issues, but I also think tha

I’d be happy to read any arguments for this being a uniquely bad time

There were extensive discussions around this at https://www.greaterwrong.com/posts/PjfsbKrK5MnJDDoFr/have-epistemic-conditions-always-been-this-bad, including one about the 1950s. (Note that those discussions were from before the recent cluster of even more extreme cancellations like David Shor and the utility worker who supposedly made a white power sign.)

ETA: See also this Atlantic article that just came out today, and John McWhorter's tweet:

Whew! Because of the Atlantic article today, I am now getting another flood of missives from academics deeply afraid. Folks, I hear you but the volume outstrips my ability to write back. Please know I am reading all of them eventually, and they all make me think.

If you're not sure whether EA can avoid sharing this fate, shouldn't figuring that out be like your top priority right now as someone specializing in dealing with the EA culture and community, instead of one out of "50 or 60 bullet points"? (Unless you know that others are already working on the problem, and it sure doesn't sound like it.)

Aaron Gertler 🔸
Thanks for linking to those discussions.

Having read through them, I'm still not convinced that today's conditions are worse than those of other eras. It is very easy to find horrible stories of bad epistemics now, but is that because there are more such stories per capita, or because more information is being shared per capita than ever before?

(I should say, before I continue, that many of these stories horrify me — for example, the Yale Halloween incident, which happened the year after I graduated. I'm fighting against my own inclination to assume that things are worse than ever.)

Take John McWhorter's article. Had a professor in the 1950s written a similar piece, what fraction of the academic population (which is, I assume, much larger today than it was then) might have sent messages to them about e.g. being forced to hide their views on one of that era's many taboo subjects? What would answers to the survey in the article have looked like?

Or take the "Postcard from Pre-Totalitarian America" you referenced. It's a chilling anecdote... but also seems wildly exaggerated in many places. Do those young academics actually all believe that America is the most evil country, or that the hijab is liberating? Is he certain that none of his students are cynically repeating mantras the same way he did? Do other professors from a similar background also think the U.S. is worse than the USSR was? Because this is one letter from one person, it's impossible to tell.

Of course, it could be that things really were better then, but the lack of data from that period bothers me, given the natural human inclination to assume that one's own time period is worse than prior time periods in various ways. (You can see this on Left Twitter all the time when today's economic conditions are weighed against those of earlier eras.)

But whether this is the worst time in general isn't as relevant as:

Taking this question literally, there are a huge number of fates I'm not sure EA c

I think the biggest reason I'm worried is that seemingly every non-conservative intellectual or cultural center has fallen prey to cancel culture, e.g., academia, journalism, publishing, museums/arts, tech companies, local governments in left-leaning areas, etc. There are stories about it happening in a crochet group, and I've personally seen it in action in my local parent groups. Doesn't that give you a high enough base rate that you should think "I better assume EA is in serious danger too, unless I can understand why it happened to those places, and why the same mechanisms/dynamics don't apply to EA"?

Your reasoning (from another comment) is "I've seen various incidents that seem worrying, but they don't seem to form a pattern." Well, if you only get seriously worried once there's a clear pattern, that may well be too late to do anything about it! Remember that many of those intellectual/cultural centers were once filled with liberals who visibly supported free speech, free inquiry, etc., and many of them would have cared enough to try to do something about cancel culture once they saw a clear pattern of movement in that direction, but that must have been too late already.

For w

... (read more)

You'd expect having a wider range of speakers to increase intellectual diversity — but only as long as hosting Speaker A doesn't lead Speakers B and C to avoid talking to you, or people from backgrounds D and E to avoid joining your community. The people I referred to in the last section feel that some people might feel alienated and unwelcome by the presence of Robin as a speaker; they raised concerns about both his writing and his personal behavior, though the latter points were vague enough that I wound up not including them in the post.

But isn't it basically impossible to build an intellectually diverse community out of people who are unwilling to be associated with people they find offensive or substantially disagree with? It seems really clear that if Speaker B and C avoid talking to you, only because you associated with Speaker A, then they are following a strategy where they are generally not willing to engage with parties that espouse ideas they find offensive, which makes it really hard to create any high level of diversity out of people who follow that strategy (since they will either conform or splinter). 

That is why it's so important to not give into those people'... (read more)

Ben_West🔸
I'd be curious how many people you think are not willing to "tolerate real intellectual diversity". I'm not sure if you are saying:

* "Sure, we will lose 95% of the people we want to attract, but the resulting discussion will be >20x more valuable so it's worth the cost," or
* "Anyone who is upset by intellectual diversity isn't someone we want to attract anyway, so losing them isn't a real cost."

(Presumably you are saying something between these two points, but I'm not sure where.)

No, what I am saying is that unless you want to also enforce conformity, you cannot have a large community of people with different viewpoints who all believe that you shouldn't associate with people they think are wrong. So the real choice is not between "having all the people who think you shouldn't associate with people they think are wrong" and "having all the weird, intellectually independent people"; it is instead between "having an intellectually uniform and conformist slice of the people who don't want to be associated with others they disagree with" and "having a quite intellectually diverse crowd of people who tolerate dissenting opinions", with the second possibly actually being substantially larger, though generally I don't think size is the relevant constraint to look at here.

I think you're unintentionally dodging both Aaron's and Ben's points here, by focusing on the generic idea of intellectual diversity and ignoring the specifics of this case. It simply isn't the case that disagreeing about *anything* can get you no-platformed/cancelled/whatever. Nobody seeks 100% agreement with every speaker at an event; for one thing that sounds like a very dull event to attend! But there are specific areas people are particularly sensitive to, this is one of them, and Aaron gave a stylised example of the kind of person we can lose here immediately after the section you quoted. It really doesn't sound like what you're talking about.

> A German survivor of sexual abuse is interested in EA Munich's events. They see a talk with Robin Hanson and Google him to see whether they want to attend. They stumble across his work on "gentle silent rape" and find it viscerally unpleasant. They've seen other discussion spaces where ideas like Hanson's were brought up and found them really unpleasant to spend time in. They leave the EA Munich Facebook group and decide not to engage with the EA community any more.

Like Ben, I understand you as either saying that this person is sufficiently uncommon that their loss is worth the more-valuable conversation, or that we don't care about someone who would distance themselves from EA for this reason anyway (it's not an actual 'loss'). And I'm not sure which it is or (if the first) what percentages you would give.

The thing that I am saying is that in order to make space for someone who tries to enforce such norms, we would have to kick out many other people out of the community, and stop many others from joining. It is totally fine for people to not attend events if they just happen to hit on a topic that they are sensitive to, but for someone to completely disengage from a community and avoid talking to anyone in that community because a speaker at some event had some opinions that they were sensitive to, that wasn't even the topic of the announced talk, is obviously going to exert substantial pressure on what kind of discourse is possible with them.

This doesn't seem to fit nicely into the dichotomy you and Ben are proposing here, which just has two options: 

1. They are uncommon

2. They are not valuable

I am proposing a third option which is: 

3. They are common and potentially valuable on their own, but also they impose costs on others that outweigh the benefits of their participation, and that make it hard to build an intellectually diverse community out of people like that. And it's really hard to integrate them into a discourse that might come to unintuitive conclusions if they ... (read more)

[EDIT: As Oli's next reponse notes, I'm misinterpreting him here. His claim is that the movement would be overall larger in a world where we lose this group but correspondingly pick up others (like Robin, I assume), or at least that the direction of the effect on movement size is not obvious.]

***

Thanks for the response. Contrary to your claim that you are proposing a third option, I think your (3) cleanly falls under mine and Ben's first option, since it's just a non-numeric write-up of what Ben said:

Sure, we will lose 95% of the people we want to attract, but the resulting discussion will be >20x more valuable so it's worth the cost

I assume you would give different percentages, like 30% and 2x or something, but the structure of your (3) appears identical.

***

At that point my disagreement with you on this specific case becomes pretty factual; the number of sexual abuse survivors is large, my expected percentage of them that don't want to engage with Robin Hanson is high, the number of people in the community with on-the-record statements or behaviour that are comparably or more unpleasant to those people is small, and so I'm generally willing t... (read more)

No. How does my (3) match up to that option? The thing I am saying is not that we will lose 95% of the people; the thing I am saying is that we are going to lose a large fraction of people either way, and that the world where you have tons of people who follow the strategy of distancing themselves from anyone who says things they don't like is a world where you both won't have a lot of people and will have tons of polarization and internal conflict.

How is your summary at all compatible with what I said, given that I explicitly said: 

with the second (the one where we select on tolerance) possibly actually being substantially larger

That by necessity means that I expect the strategy you are proposing to not result in a larger community, at least in the long run. We can have a separate conversation about the exact balance of tradeoffs here, but please recognize that I am not saying the thing you are summarizing me as saying. 

I am specifically challenging the assumption that this is a tradeoff of movement size, using some really straightforward logic of "if you have lots of people who have a propensity to distance themselves from others, they will distance themselves and things will splinter apart". You might doubt that such a general tendency exists, or you might doubt that the inference here is valid and think that there are ways to keep such a community of people together either way; in either case, please don't claim that I am saying something I am pretty clearly not saying.

Thank you for explicitly saying that you think your proposed approach would lead to a larger movement size in the long run, I had missed that. Your actual self-quote is an extremely weak version of this, since 'this might possibly actually happen' is not the same as explicitly saying 'I think this will happen'. The latter certainly does not follow from the former 'by necessity'.

Still, I could have reasonably inferred that you think the latter based on the rest of your commentary, and should at least have asked if that is in fact what you think, so I apologise for that and will edit my previous post to reflect the same.

That all said, I believe my previous post remains an adequate summary of why I disagree with you on the object level question.

Your actual self-quote is an extremely weak version of this, since 'this might possibly actually happen' is not the same as explicitly saying 'I think this will happen'. The latter certainly does not follow from the former 'by necessity'.

Yeah, sorry, I do think the "by necessity" was too strong. 

You'd expect having a wider range of speakers to increase intellectual diversity — but only as long as hosting Speaker A doesn't lead Speakers B and C to avoid talking to you

As an aside, if hosting Speaker A is a substantial personal risk to the people who need to decide whether to host Speaker A, I expect the decision process to be biased against hosting Speaker A (relative to an ideal EA-aligned decision process).

Aaron Gertler 🔸
I agree with this.

Had EA Munich hosted Hanson and then been attacked by people using language similar to that of the critics in Hanson's Twitter thread, I may well have written a post excoriating those people for being uncharitable. I would prefer if we maintained a strong norm of not creating personal risks for people who have to handle difficult questions about speech norms (though I acknowledge that views on which questions are "difficult" will vary, since different people find different things obviously acceptable/unacceptable).

On a meta-level, our attitude towards "cancellation from a public event" is fairly weird. Had EA Munich simply chosen not to host Hanson's talk to begin with, we almost certainly wouldn't be having this discussion. But because they chose to host a talk and then changed their minds, they now face lots of handwringing and have prompted a larger EA internet conversation.

This feels structurally similar to what Jai Dhyani calls "The Copenhagen Interpretation of Ethics," though of course it is not exactly the same.

I don't quite understand this asymmetry (though I too feel a similar draw to think/opine in great detail about the "withdrew an event" case, but not the "didn't choose to hold an event" case). But in terms of first-order outcomes, they seem quite similar*!

*They're of course not identical; for example, asking someone to give a talk and then changing your mind is professionally discourteous, can waste the speaker's preparation time, etc. But to first order, the lack of professional courtesy and the (say) 2 hours of wasted time seem quite small compared to the emotional griping we've had.

> It sends public signals that you'll submit to blackmail and that you think people shouldn't affiliate with the speaker. The former has strong negative effects on others in EA because they'll face increased blackmail threats, and the latter has negative effects on the speaker and their reputation, which in turn makes it less likely for interesting speakers to want to speak with EA because they expect EA will submit to blackmail about them if any online mob decides to put their crosshairs on that speaker today.

Talk of 'blackmail' (here and elsethread) is substantially missing the mark. To my understanding, there were no 'threats' being acquiesced to here.

If some party external to the Munich group pressured them into cancelling the event with Hanson (and without this, they would want to hold the event), then the standard story of 'if you give in to the bullies you encourage them to bully you more' applies.

Yet unless I'm missing something, the Munich group changed their minds of their own accord, and not in response to pressure from third parties. Whether or not that was a good decision, it does not signal they're vulnerable to 'blackmail threats'. If anything, they've signalled the opposite by not reversing course after various folks castigated them on Twitter etc.

The distinction between 'changing our minds on the merits' and 'bowing to public pressure' can get murky (e.g. public outcry could genuinely prompt someone to change their mind that what they were doing was wrong after all, but people will often say this insincerely when what really happened is they were cowed by opprobrium). But again, the apparent abse... (read more)

Having participated in a debrief meeting for EA Munich, my assessment is indeed that one of the primary reasons the event was cancelled was fear of disruptors showing up at the event, as they have done at some of Peter Singer's events. Indeed, almost all concerns that were brought up during that meeting were concerns about external parties threatening EA Munich, or EA at large, in response to inviting Hanson. There were some minor concerns about Hanson's views qua his views alone, but basically all organizers who spoke at the debrief I was part of said that they were interested in hearing Robin's ideas and would have enjoyed participating in an event with him, and were primarily worried about how others would perceive it and react to inviting him.

As such, blackmail feels like a totally fair characterization of a substantial part of the reason for disinviting Hanson (though definitely not 100% of it).

More importantly, I am really confused why you would claim so confidently that no threats were made. The prior for actions like this being taken in response to implicit threats is really high, and talking to any person who has tried to organize events like this will sho... (read more)

I found it valuable to hear information from the debrief meeting, and I agree with some of what you said - e.g. that it seems a priori plausible that implicit threats played at least some role in the decision. However, I'm not sure I agree with the extent to which you characterize the relevant incentives as threats or blackmail.

I think this is relevant because talk of blackmail suggests an appeal to clear-cut principles like "blackmail is (almost) always bad". Such principles could ground criticism that's independent from the content of beliefs, values, and norms: "I don't care what this is about, structurally your actions are blackmail, and so they're bad."

I do think there is some force to such criticism in cases of so-called deplatforming including the case discussed here. However, I think that most conflict about such cases (between people opposing "deplatforming" and those favoring it) is not explained by different evaluations of blackmail, or different views on whether certain actions constitute blackmail. Instead, I think they are mostly garden-variety cases of conflicting goals and beliefs that lead to a different take on ce... (read more)

> talk of blackmail suggests an appeal to clear-cut principles like "blackmail is (almost) always bad"

One ought to invite a speaker who has seriously considered the possibility that blackmail might be good in certain circumstances, written blog posts about it etc.

https://www.overcomingbias.com/2019/02/checkmate-on-blackmail.html

> there was also one very explicit threat made to the organizers at EA Munich, at least if I remember correctly, of an organization removing their official affiliation with them if they were to host Hanson.

If I were reading this and didn't know the facts, I would assume the organization you're referring to might be CEA. I want to make clear that CEA didn't threaten EA Munich in any way. I was the one who advised them when they said they were thinking of canceling the event, and I told them I could see either decision being reasonable. CEA absolutely would not have penalized them for continuing with the event if that's how they had decided.

Yes! This was definitely not CEA. I don't have any more info on what organization it is (the organizers just said "an organization").

Sorry, didn't mean to imply that you intended this - just wanted to be sure there wasn't a misunderstanding.

FYI, I read this, didn't know the facts, and it didn't occur to me that the organisation Habryka was referring to was CEA - I think my guess was that it was maybe some other random student group?

Linch
It didn't occur to me that the organization was CEA but I also didn't read it too carefully.

> As such, blackmail feels like a totally fair characterization [of a substantial part of the reason for disinviting Hanson (though definitely not 100% of it).]

As your subsequent caveat implies, whether blackmail is a fair characterisation turns on exactly how substantial this part was. If in fact the decision was driven by non-blackmail considerations, the (great-)grandparent's remarks about it being bad to submit to blackmail are inapposite.

Crucially, (q.v. Daniel's comment), not all instances where someone says (or implies), "If you do X (which I say harms my interests), I'm going to do Y (and Y harms your interests)" are fairly characterised as (essentially equivalent to) blackmail. To give a much lower resolution of Daniel's treatment, if (conditional on you doing X) it would be in my interest to respond with Y independent of any harm it may do to you (and any coercive pull it would have on you doing X in the first place), informing you of my intentions is credibly not a blackmail attempt, but a better-faith "You do X then I do Y is our BATNA here, can we negotiate something better?" (In some treatments these are termed warnings versus thr... (read more)

I agree that the right strategy for dealing with threats is substantially different from the right strategy for dealing with warnings. I think it's a fair and important point. I am not claiming that it is obvious that absolutely clear-cut blackmail occurred, though overall, aggregating over all the evidence I have, it seems very likely (~85%-90%) to me that a situation game-theoretically similar enough to a classical blackmail scenario has played out. I do think your point about the importance of assessing whether we are dealing with a warning or a threat is one of the key pieces I would want people to model when thinking about situations like this, and so your relatively clear explanation of it is appreciated (as well as the reminder for me to keep the costs of premature retaliation in mind).

> Yet you mete out much more meagre measure to others than you demand from them in turn, endorsing fervid hyperbole that paints those who expressed opposition to Munich inviting Hanson as bullies trying to blackmail them, and those sympathetic to the decision they made as selling out.

This just seems like straightforward misrepresentation? What fervid hyper... (read more)

Misha_Yagudin
We probably wouldn't know, and hence the issue wouldn't get discussed. It is plausible that if someone made it widely known that they decided not to invite a speaker based on similar considerations, it could have been discussed as well, since I expect "X is deplatformed by Y" to provoke a similar response to "X is canceled by Y" among people who care about the incident. I am not sure it is a case of The Copenhagen Interpretation of Ethics, as I doubt the people arguing against the cancellation would think the decision was an improvement on the status quo.
Linch
Hmm, in my parent comment I said "structurally similar, though of course it is not exactly the same," which means I'm not defending that it's exactly a case. However, upon a reread, I actually think considering it a noncentral example is not too badly off. I think the primary characterization of the Copenhagen Interpretation of Ethics is a fairly accurate representation, though it does not fit the secondary constraints Jai lays out. In this case, by choosing to invite a speaker and then (privately) cancelling, they've indeed made the situation worse by a) wasting Hanson's time and b) mildly degrading professional norms. But that level of badness seems on the whole pretty mediocre/mundane to first order.

> Some of Hanson’s writing has probably been, on net, detrimental to his own influence...

https://xkcd.com/137

I know that this is probably about clearly illustrating the emotional impetus behind one viewpoint, but I can't get on board with people going "fuck that shit" at difficult tradeoffs.

I think this is a complex issue, and a confident stance would require a fair bit of time of investigation.

I don't like the emotional hatred going on on both sides. I'd like to see a rational and thoughtful debate here, not a moralistic one. I don't want to be part of a community where people are figuratively tarred and feathered for making difficult decisions. I could imagine that many of us may wind up in similar positions one day.

So I'd like discussion of Robin Hanson to be done thoughtfully, and also discussions of EA Munich to be done thoughtfully. 

The [Twitter threads](https://twitter.com/pranomostro1/status/1293267131270864903) seem like a mess to me. There are a few thoughtful comments, but tons of misery (especially from anonymous accounts and the like). I guess this is one thing that Twitter is just naturally quite poor at. 

There are a lot of hints that the EA Munich team is exhausted by the response:

"Note that this document tries to represent the views of 8 different people on a controversial topic, compiled within a couple of hours, and is therefore necessarily simplifying."

"Because we're kind of overwhelmed with the situation, we won't be able to r... (read more)

To be more clear, I think the snarky comments on Twitter on both sides are a pretty big anti-pattern and should be avoided. They sometimes get lots of likes, which is particularly bad. 

I certainly agree that it would be great if the debate was thoughtful on all sides. But I am reluctant to punish emotional responses in these contexts.

When I look at this thread, I see a lack of women participating. Exceptions: Khorton, and Julia clarifying a CEA position. There were also a couple of people whose gender I could not quickly identify.

There are various explanations for this. I am not sure the gender imbalance on this thread is actually worse than on other threads. It could be noise. But I know why I said nothing: I found writing a thoughtful, non-emotional response too hard. I expect to fail because the subject is too upsetting.

This systematically biases the debate in favour of people who bear no emotional cost in participating.

In the 'Recent Discussion' feed of the front page of the EA forum, I found this page between Owen Cotton-Barratt's AMA and this question about insights in longtermist macrostrategy. The AMA had 9 usernames that appeared male to me, no usernames that appeared female to me, and 3 usernames whose gender I couldn't discern. The macrostrategy discussion had 12 names that appeared male to me, 1 that I gathered was female based on this comment, and 3 whose gender I couldn't discern. This should obviously be taken with a grain of salt, since determining gender from usernames is a tricky business.

Interesting, and thanks, Denise, for a different take. When I read Ozzie's comment, I thought he meant that the people leaping to Robin's defense should consider that they might be over-emotional, chill out a bit, and practice their rationality skills. Which I would agree with. I don't think there's *no* concern that reasonable people could have here. I can think of several concerns, some of which have been pointed out in the comments on this post. But I think people who are freaked out by this one decision seem just as likely to be reacting with the kind of knee-jerk fear, tribalism, confirmation bias, and slippery slope thinking that they'd be quick to criticize in others. This is human, but honestly, it's disappointing. I'm appreciating the more measured responses on this post, though there's still some catastrophizing that seems kind of tiresome. There's so much of that going around in the world, I'd like to see EAs or rationalists handle it better.

Thanks for the points Denise, well taken.  

I think the issue of "how rational vs. emotional should we aim to be in key debates" (assuming there is some kind of clean distinction) is quite tricky.

I would point out some quick thoughts, which might be wrong.
1. I'm also curious to better understand why there isn't more discussion by women here. I could imagine a lot of possible reasons for this. It could be that people don't feel comfortable providing emotional responses, but it could also be that people notice that responses on the other side are so emotional that there may be severe punishment.
2. Around the EA community and on Twitter, I see many more emotional-seeming arguments in support of Robin Hanson than against him. Twitter is really the worst at this.
3. Courts have established procedures for ensuring that both judges and juries are relatively unbiased, fair, and (somewhat) rational. There's probably some interesting theory here we could learn from.
4. I could imagine a bunch of scary situations where important communication gets much more emotional. If it gets less emotional, that's trickier to tell. I like to think that rationally minded people could help seek out biases like the one you mention and respond accordingly, instead of having to modify a large part of the culture to account for them.

Kirsten
"Courts have established procedures for ensuring that both judges and the juries are relatively unbiased, fair, and (somewhat) rational. There's probably some interesting theory here we could learn from." In this analogy, I don't feel like I'm commenting as a rational member of the jury, I feel like I'm commenting as an emotional witness to the impact of tolerating sexist speech in the EA community.
Ozzie Gooen
Yeah, I think the court analogy doesn't mean we should all aim to be "rational", but that some of the key decision-makers and discussions should be held to a standard. Having others come in as emotional witnesses makes total sense, especially if it's clear that's what's happening.
> The more time someone spends talking to a variety of community members (and potential future members), the more likely they are to have an accurate view of which norms will best encourage the community’s health and flourishing.

Correctness is not a popularity contest; this feels like an intellectual laundering of groupthink. Also, if you promote a particular view, that *changes* who is going to be a member of the community in the future, as well as who is excluded.

For example, the EA community has decided to exclude Robin Hanson and be more inclusive towards Slate journalists and people who like the opinions of Slate; this defines a future direction for the movement, rather than causing a fixed movement to either flourish or not.

Aaron Gertler 🔸
This isn't at all what I was trying to say. Let me try to restate my point: "If you want an accurate view of what people say will help them flourish in the community, you're more likely to achieve that by talking to a lot of people in the community."

Of course, what people claim will help them flourish may not actually help them flourish, but barring strong evidence to the contrary, it seems reasonable to assume some correlation. If members of a community differ on what they say will help them flourish, it seems reasonable to try setting norms that help as many community members as possible (though you might also adjust for factors like members' expected impact, as when 80,000 Hours chooses a small group of people to advise closely).

*****

Whether EA Munich decides to host a Robin Hanson talk hardly qualifies as "the EA community deciding to exclude Robin Hanson and being more inclusive towards Slate journalists," save in the sense that what eight people in one EA group do is a tiny nudge in some direction for the community overall. In general, the EA community tends to treat journalists as a dangerous element, to be managed carefully if they are interacted with at all.

For example, the response to Scott Alexander's near-doxxing (which drew much more attention than the Hanson incident) was swift, decisive, and near-unified in favor of protecting speech and unorthodox views from those who threatened them. To me, that feels much more representative of the spirit of EA than the actions of, again, a single group (who were widely criticized afterward, and didn't get much public support).
ragyo_odan_kagyo_odan
If it's only a tiny nudge, why are we talking about it? Why is it important for a teacher to give a harsh detention to the first student who challenges their authority, or for countries to defend their borders strictly rather than let it slide if someone encroaches just a few kilometres? An expectation is being set here. Worse, an expectation has been set that threats of protest are a legitimate way to influence decision-making in our community. You have ceded authority to Slate by obeying their smear-piece on Hanson. Hanson is one of our people, you left him hanging in favour of what Slate thought. EA people are, IMO, being naïve.
Aaron Gertler 🔸
I'm talking about something I considered a tiny nudge because I thought that a lot of people, including people who are pretty influential in communities I care about, either reacted uncharitably or treated the issue as a much larger deal than it was. To whom is "you" meant to refer? I don't work on CEA's community health team and I've never been in contact with EA Munich about any of this. I also personally disagreed with their decision and (as I noted in the post) thought the Slate piece was really bad. But my disagreeing with them doesn't mean I can't try to think through different elements of the situation and see it through the eyes of the people who had to deal with it.
3
ragyo_odan_kagyo_odan
I think the issue here is attempting to unilaterally disarm in a culture war. If your attitude is "let's think through different elements of the situation and see it through the eyes of the people", and their attitude is "let's use the most effective memetic superweapons we have access to to destroy everyone we disagree with", then you're going to lose and they are going to win.

A stark conclusion of "you're going to lose" seems like it's updating too much on a small number of examples. 

For every story we hear about someone being cancelled, how many times has such an attempt been unsuccessful (no story) or even led to mutual reconciliation and understanding between the parties (no story)? How many times have niceness, community, and civilization won out over opposing forces?

(I once talked to a professor of mine at Yale who was accused by a student of sharing racist material. It was a misunderstanding. She resolved it with a single brief email to the student, who was glad to have been heard and had no further concerns. No story.)

I'm also not sure what your recommendation is here. Is it "refuse to communicate with people who espouse beliefs of type X"? Is it "create a centralized set of rules for how EA groups invite speakers"?

Thanks for writing this post. I'm glad this incident is getting addressed on the EA forum. I agree with most of the points being made here.

However, I'm not sure that "becoming more attentive to various kinds of diversity" and maintaining norms that allow for "the public discussion of ideas likely to cause offense" have to be at odds. In mainstream political discourse it often sounds like this is the case; however, I would like to think that EA might be able to balance these two concerns without making any significant concessions.

The reason I think this might be possible is because discussions among EAs tend to be more nuanced than most mainstream discourse, and because I expect EAs to argue in good faith and to be well intentioned. I find that EA concerns often transcend politics, and so I would expect two EAs with very different political views to be able to have more productive discussions on controversial topics than two non-EAs.

Aaron Gertler 🔸
I think this is true, but even if EA discussion might be more productive, I still think trade-offs exist in this domain. Given that the dominant culture in many intellectual spaces holds that public discussion of certain views is likely to cause harm to people, EA groups risk appearing very unwelcoming to people in those spaces if they support discussion of such views.  It may be worthwhile to have these discussions anyway, given all the benefits that come with more open discourse, but the signal will be sent all the same.

I would really appreciate if commentators were more careful to speak about this specific instance of uninviting a speaker instead of uninviting speakers in general, or at least clarify why they choose to speak about the general case.

I am not sure whether they chose to speak about the general case because they think uninviting in this particular case would in itself be an appropriate choice but sets up a slippery slope to uninvite more and more speakers, or because they think uninviting in this particular case is already net negative for the movement.

Kirsten
I've also wondered about this.

Thanks for writing up your thoughts on the incident and showing that much respect to both sides of the argument!

I'm a bit confused about the last parts (7. and 8.):
1. Would a rephrasing of 8. as "Some of the people who spent a lot of time having private conversations with community members think that EA should be more cautious and attentive to diversity. And some of them don't. So we can't actually draw conclusions from this." be fair?
2. By whom is EA is presented as some kind of restrictive orthodoxy? So far, I did not get the i... (read more)

Aaron Gertler 🔸
1. Yes, you could rephrase it that way. I've spoken directly to the people who think we should be more cautious/attentive, but only heard secondhand from them about the people who think this is a bad idea (and have talked to lots of community members about these topics -- I've met people with views all over the spectrum who haven't had as many such conversations).

2. I was referring mostly to the comments that popped up in the various Twitter threads surrounding the decision, one of which I linked at the top of the piece. A few quotes along these lines:

"Effective altruism has been shown to be little more than the same old successor-ideology wearing rationalism as a skin-suit."

"They believe they are in a war and the people like Hanson are the enemy."

"If EA starts worrying about PR and being inoffensive, what even is the point anymore? Make EA about EA, not about signaling."

"There always was something 'off' about so-called effective altruism."

Some of these types of comments probably come from people who never liked or cared about EA much and are just happy to have something to criticize. But I sometimes see similar remarks from people who are more invested in EA and seem to think it's become much more censorious over time. While there is some truth to that (as I mention in the piece), I think the overall picture is much more complicated than these kinds of claims make it out to be.

*****

Regarding trade-offs, that would be a much longer post. You could check the "Diversity and Inclusion" tag, which includes some Forum posts along similar themes. Kelsey Piper's writing on "competing access needs" is also relevant.