A draft essay recently posted in the Diversity & Inclusion in EA Facebook group sparked conversation among EAs about what is helpful and what is harmful when it comes to diversity and inclusion in this community. 12+ anonymous contributors subsequently crafted the following article to explain the effects of alienating conversations for those in marginalized communities.
Content note
Under “What is it like?” condescending views towards women are mentioned. Under “Example topics” rape, abuse, victim blaming, eugenics, mercy killings, autonomy, usefulness-based worth and gaslighting are mentioned. Under “Unsure about a discussion topic?” a view that questions the equal worth of disabled people is mentioned.
Document purpose
This document is written by 10 EAs belonging to different groups that are underrepresented in EA, who have experienced feeling alienated by certain discussions within the EA community. We want to provide information about what kind of discussions we have experienced as alienating and why, so that some of the impacts of such discussions can be better examined.
It is our belief that underrepresented groups should have a place in the wider EA community. The scope of this document is not to argue for the inclusion of underrepresented groups in the EA community.
This document has three audiences in mind: 1. Organizers/moderators for groups that are aimed at underrepresented people in EA. 2. Organizers/moderators for groups that want to create a welcoming and inclusive environment. 3. Individuals in the EA community who want to create a welcoming and inclusive environment around them.
This document will not tell you never to speak of certain topics. The intention of this document is to be informative, not prescriptive. These are difficult decisions and we do not intend to over-simplify them.
We want to provide information so that each person and group can better make decisions about when, how and with whom to have certain discussions. In this way we hope that this document may be a basis for clearer principles of moderation within EA groups.
Please note that on the topic of biases and oppression the references given in this document are intended to be springboards for those who are extra curious. They are not intended to give proof. The ideas we mention in these areas are widely accepted in social psychology and sociology and are added for context. The main purpose of this document is to share our own experiences.
The principle of limiting debate to increase inclusion
First of all, we might ask ourselves how limiting openness to different ideas within a community can ever lead to increased inclusion. We argue that being a part of an inclusive community can sometimes mean refraining from pursuing every last theory or thought experiment to its end in public places. This principle may be easier to understand if we use an extreme example: if someone in an EA group started seriously proposing, using evidence and reasoning, that women are lesser and that humanity would be better off if women were controlled by men, most group members would not want there to be a serious debate on that topic. This is because: 1. They want to spend their limited time discussing topics that they find more serious and productive, rather than explaining to someone why their science is wrong, their reasoning is poor and their perspective is harmful. 2. They recognize that having people even entertain this possibility can be deeply alienating to women in the community.
Just as it would be exhausting and counterproductive if we had to repeatedly justify whether EA is a good idea in EA groups, it is exhausting and counterproductive when underrepresented groups have to repeatedly justify for instance their equality or that the group should be inclusive for them.
We acknowledge that there is sometimes a trade-off between being inclusive to those who prioritize free exploration of all ideas and being inclusive to those who want a respite from discussion they find exhausting or damaging. Like most EAs, we have ideals of openness and rational discussion. We also believe, however, that the current average level of openness in EA communities to ideas that negatively affect underrepresented groups is contributing to the alienation of many EAs. We feel that different levels of openness can be appropriate for different spaces.
On truth and limiting debate
We do not want to spend the majority of this document talking about truth, because we feel that if people in underrepresented groups are truly seen as equal, then their feeling comfortable in this space should be seen as a goal in itself, not just a means to reach truth. However, we understand that some are worried that if we limit certain discussions to certain contexts this will also land us further away from truth. So we will briefly address this.
We believe that a core part of EA is questioning beliefs and assumptions to fully understand their basis and assessing arguments for their strength and validity.
If we were all objective and society was fair this would mean any topic could be discussed by anyone at any place in a way that progressed truth. But we are not objective and society is not fair. Our ideas about certain groups are informed by a history of oppression in which some groups have been seen as inferior to others. There are still systematic demographic differences in who holds the majority of the economic and political power and we all still hold conscious and unconscious biases. This has to be taken into consideration in order to reach truth. In this context assessing arguments for their strength must include thinking about the biases that exist against certain groups. Having discussions where all valuable input is heard must include thinking about who has more and less power. And occasionally, staying clear of a topic will lead to greater truth in the long run, because we have avoided the alienation of the very people who could aid our understanding.
Online forums like Facebook, which allow only brief comments lacking in tone, provide particularly poor arenas for sensitive discussions. As a result, these discussions are very often unproductive and can also carry harms, such as one-sided debate resulting in biased and false conclusions, as well as the exclusion and further reduced participation of members of underrepresented groups.
We believe that adapting to the context of societal oppression when we decide how to have certain discussions will get us closer to truth. This is because: 1. We will be better at seeing our own and others’ biases in the discussion. 2. We will listen more fairly to people with experience and/or expertise in the discussion. 3. People with experience and/or expertise are more likely to voice their opinion because they won’t feel threatened by the way we are conducting the discussion. 4. If we believe that talented and altruistic minds are distributed across all demographic groups, it follows that alienating one of these groups will limit the ability of EA as a whole to find truth - because we are not retaining the “best” minds.
Are some discussions in EA groups alienating?
As people in underrepresented groups within EA, we repeatedly find ourselves exposed to content or debates that we find alienating. This causes many of us to limit our contact with the community.
It is hard to get accurate data on what proportion of people feel alienated and/or are pushed to limit their interaction with a community, because they will tend not to respond to surveys within that community. However, in the Facebook group Women and non-binary people in Effective Altruism, 47 out of 52 poll respondents (about 90%) say that they have at some point felt put off by a discussion relating to a minority identity (e.g. a discussion on women, gay people, or disabled people) in an EA group or community. This shows that while some people do not have this experience, it is a very common experience within this group.
What’s more, about half (27 out of 51) of the respondents said they have reduced their participation in some part of the community due to such discussions.
Keep in mind that there may be a selection bias in both directions. We can imagine that people who feel alienated by some EA communities may be more likely to seek out a group for women and nonbinary people. At the same time the respondents are people who are active enough in EA to see and respond to this poll - people who completely leave the community could not vote.
Why is this happening?
There are a lot of reasons why people are initiating alienating discussions, not intervening to stop such discussions and not taking it seriously when people object. We want to address three such reasons: Underrepresentation, negative biases against certain groups, and a lack of understanding of the nature of oppression.
Underrepresentation
Some groups are underrepresented in the EA community. In 2017, 70 percent of respondents to the Annual Rethink Charity EA Survey were male, 80 percent were atheists, agnostics or non-religious, and 89 percent were white [1]. No matter what group you belong to, we all tend to see our own experience as the norm. This can lead certain perspectives to be ignored, simply because they are underrepresented.
For instance, it can be tempting to think that our own understanding of which types of debates are offensive is objective, and that people who feel put off by debates that are not on our personal “offensive list” are just being overly sensitive. If you belong to one underrepresented group but not the others, you may have the experience of thinking that the topics listed below that involve your own group are clearly offensive (assuming that you have a similar background), but that the topics regarding other groups are fair game. If you belong to none of these groups, this tendency to see your own perspective as the norm can be harder to identify.
Bias
Another important factor is that evidence suggests that most people in society have some subconscious negative biases against people within these underrepresented groups [2]. As EAs we are not exempt from such biases against women, people of colour, disabled people, LGBTQ people etc. This makes us more likely to create and entertain theories about these groups which are based on negative stereotypes. Even people belonging to an underrepresented group tend to have biases against their own group [3].
These biases are likely to contribute to the voiced discomfort of these groups being taken less seriously [4]. Biases cannot be overcome simply by focusing on making a community meritocratic [5]; more structural solutions are usually needed.
A lack of understanding of oppression in wider society [6]
A third factor in why alienating discussions take place in the EA community is that the degree to which you understand the oppression of a certain underrepresented group in wider society tends to depend on whether you belong to that group.
People from underrepresented groups tend to have a better understanding of their own group’s oppression for three reasons: They have experienced it personally, they have been more exposed to it through their community and they are more likely to take a special interest in learning about it academically.
Because people outside of the oppressed group do not share the same complex understanding of what ideas form part of an oppressive structure, they may tend to think that the reactions of people within this group to alienating discussions are overreactions. They may even brand the person as being “overly sensitive”, “biased”, “irrational”, or less willing to engage with ideas.
If an EA woman responds dismissively when you suggest that women are not really oppressed, it is easy to think that she is just a rude person. Or perhaps that she has a self-serving bias or doesn’t much care what the truth of the matter is. In reality, it is likely she is acting from extensive experience of having heard that same argument used as a tool to oppress women.
(History tells us that when the unfair treatment of a group is denied, this serves to uphold structures of oppression against that group. However clear the unequal treatment has been, there have always been people who argued that it wasn’t oppressive. For instance, Ruth Bader Ginsburg describes how, when she worked as a lawyer on gender discrimination cases, “the judges didn’t think sex-discrimination existed” [7]. This was at a time when, for instance, Idaho law still explicitly stated that males took preference over females as executors [8].)
The fact that most people who do not belong to the underrepresented group in question do not have a good understanding of arguments that have been used to oppress that group, can create an imbalance in what is perceived to be an alienating topic.
Previous experience as a person in an underrepresented group can also have a significant effect on how a discussion is experienced. Other than the emotionally taxing nature of feeling that one’s autonomy, worth, freedom or safety are being challenged, there is also a cost in the form of time and effort. These discussions take up a disproportionate amount of time for people in the affected group, something non-group members may not be aware of.
“Black and Third World people are expected to educate white people as to our humanity. Women are expected to educate men. Lesbians and gay men are expected to educate the heterosexual world. The oppressors maintain their position and evade their responsibility for their own actions. There is a constant drain of energy which might be better used in redefining ourselves and devising realistic scenarios for altering the present and constructing the future.” - Audre Lorde
How do people become alienated?
When people feel unwelcome in a group, they are unlikely to voice their discomfort, which can lead to the appearance of harmony. This can cause remaining group members to be confused as to why people from certain underrepresented groups don’t stick around. They may conclude that people from these groups simply aren’t as interested in the effective part of effective altruism, especially as there is evidence that suggests a broad range of people are biased towards seeing people who belong to these groups as less rational [9]. We argue instead that people from underrepresented groups are being put off by certain cultural aspects of the community, such as alienating discussions. A survey conducted by EA London [10] shows that women are just as likely to attend an initial EA event as men, but then are less likely to return.
What is it like?
Here is an example of what it may be like to be part of an underrepresented group when a post is made that relates to your group. While this is a hypothetical example, it is based on several real experiences of women in EA:
A man posts in a general EA group arguing that women on average are not really oppressed in this society. I know two things from my own experiences and from my studies: 1. Women have less economic and political power in society. 2. The denial of the oppression of certain groups has been used to justify or ignore mistreatment of those groups throughout history and across cultures (see above). Oppression is unfair treatment of a certain group. To deny oppression is to deny either the treatment of that group or the unfairness of that treatment.
Because of the history of this view I believe that it is likely to affect many people negatively if this view is spread. I want to protect myself and others from this. I think more specifically about the women who may see this, particularly the younger ones who are new to EA. I imagine that for some of them, it will lead them away from EA ideas and all the value EA engagement could have for them and for the world. Worse still, others may absorb the idea that the way women are treated is fair and shrink themselves, diminishing their own self-worth and diminishing the positive impact they could have on the world.
The problem is that I don’t know which one the poster believes. Is he unaware that women have less economic and political power in our society, or does he not think women having less power is oppression because people like me ought to have less power than people like him?
No one else is responding either to disagree with the poster or to tell him this isn’t the place for that debate. A few people, mostly (but not only) men, like the post. Unbeknownst to me many women are staying clear because they are too tired of another invalidation of their experience, and too afraid of being seen as irrational or overly sensitive, and many men are staying clear because they know they’ll piss people off on some side no matter what they say. Not knowing this at the time I’m wondering if they all agree with him.
I hesitate because I have a lot of work to do, and I know that once I write something I will likely face pushback from the poster and some like-minded others. On top of that, I’m likely to be painted in an unflattering light. However, I also believe that if the poster’s view goes unchallenged it will add another layer to the cultural sediment of women’s inferiority, further entrench this notion in the minds of those who already believe it, and may sway some of those who were sitting on the fence towards this belief.
He uses all the right phrases about being open to changing his mind so I decide to give him the benefit of the doubt and show him the numbers. I stay patient and polite even though he seems to be defending treatment that is deeply harmful to me and people I care about and even though I am repeating information that is easily accessible through google and which I have given out many times before.
While he stays polite, it seems like he's not taking in anything I'm saying. Two days into the debate he makes a comment about women needing to be controlled “for their own good” and it becomes apparent he thinks women are not oppressed because men having power over women is the way things should be.
No men have intervened. I feel like I have just wasted hours on a debate that did not lead anyone closer to truth and has reawakened an old feeling of being devalued and unsafe due to my identity. And yet if I had not engaged, I would have made others in my group feel like they are even more alone. Not only did this discussion make it seem like it was up for debate whether the inequality between men and women is justified, it made it seem like most EAs agree with him that it is.
Example topics
In many cases it is not clear cut whether a debate should be discouraged, and judgment calls are necessary. A good start to making such calls is to know what kind of topics are felt to be alienating to people in underrepresented groups. Below we present examples of discussions in the community that have been felt to be alienating by some EAs who belong to underrepresented groups and who volunteered to contribute their ideas.
We believe that to understand what is off-putting to people within a certain underrepresented group, we need to listen to people within this group. We acknowledge that not everyone will find the same ideas alienating, nor is the list exhaustive. We by no means claim to speak for all people within these groups.
We provide this list so that each person and group in the community can decide for themselves what conversations to promote and limit in different settings based in part on an understanding of what is often off-putting to people from underrepresented groups.
Women and female-presenting people
· Whether it is or has been right or necessary that women have less influence over intellectual debate and less economic and political power
· Whether women need or have needed to have fewer freedoms and less autonomy for their own good.
· (Warning: from personal experience one of the authors of this document highly recommends that you do not read this if rape is a trigger for you) http://www.overcomingbias.com/2010/11/gentlesilentrape.html
· Whether men or women are less competent in a certain area because of evolution [11]
· Whether it can be a woman's responsibility that she is raped, oppressed or abused
· Whether having sex with men is a moral obligation/whether women want too little sex
Disabled people
· Whether the lives of severely disabled people are worth living
· Whether killing a severely disabled person who isn’t able to consent is a mercy
· The idea that a person’s worth or right to have a decent life is dependent on their contribution to the economy, to others’ happiness or to the gene pool (whatever that means)
· The idea that certain people shouldn’t have children in order not to “contaminate” the gene pool (this includes people with low IQ)
LGBTQ+ people
· Whether gay people contribute to the survival of the species
· Whether trans people are “delusional”
· Whether the name and/or pronouns you ask people to refer to you as should be respected
· Whether trans people are trying to trick people into sleeping with them
People of Colour
· Whether there are IQ differences between different “races”
· Whether current inequalities are due to evolution/ intelligence
People from the developing world
· Whether we should value people in the developing world less because they are less productive
· Whether people in the developing world are poor because of character flaws
Religious minorities
· Whether Muslim majority countries have political and economic problems because of moral shortcomings specific to Muslims.
· Whether Muslims have a victim mentality
Working class people
· Whether poor people are poor due to having lower IQ
· Whether working class people are stupid
Minors (as an exception, this contribution was made not by a minor themselves but by a parent of a minor)
· Whether corporal punishment of children or not-yet cognitively mature people has utility.
Additional resource: Unsure about a discussion topic?
As an individual you may at some point want to have a discussion relating to an underrepresented group, e.g. a discussion on access to safe abortion or on whether disabled people’s lives are worth as much as abled people’s. Thinking about it, you may be worried that discussing this topic in the normal EA groups may be off-putting to people within the underrepresented group. However, you may not want to limit the discussion to only groups that are particularly geared towards uninhibited freedom of speech, because you may want to be sure to get feedback from a diverse audience. Here are our suggestions for what you might do in this situation:
1. Imagine who may feel put off by your post. If possible, do a google search and look at whether people advocating for this particular group have anything to say about the discussion or possible biases underlying it.
2. Find a close friend who belongs to that underrepresented group, tell them about your predicament and ask if they would be willing to give you their insight as a person who belongs to the group in question. If you have no close friends in this group, you might want to reconsider whether you are the right person to be having discussions relating to that group. If your friend says no to talking about the topic, respect that; they have no obligation to educate you on subjects that may be very difficult for them or may take up a disproportionate amount of their time.
3. Ask your friend if they think your theory is based on stereotyped views about the group in question. Your friend's lived experience is likely to mean that they have a good idea of the kind of stereotypes that people have about their group. Be willing to learn from their insight and don't take it personally if they think your theory is based on some stereotyped views.
4. If you still want to have a public discussion ask your friend if there is a non-alienating way for you as a person who does not belong to the group in question to hold a discussion on this issue, and if so how to conduct that discussion. They might for instance tell you that there are certain offensive stereotypes that you have to explicitly say that you don't agree with in the post, in order for people not to misunderstand you.
5. If you decide to have the discussion and it draws criticism from members of an underrepresented group, think about whether the way you are conducting your discussion may be alienating, and show a willingness not only to move your discussion but to re-examine your ideas, bearing in mind the implicit biases against underrepresented groups that we have highlighted in this document.
Footnotes
[1] https://rtcharity.org/ea-survey-2017-part-2/
[2] Example, biases against women in science: https://www.ilo.org/wcmsp5/groups/public/---ed_dialogue/---act_emp/documents/publication/wcms_601276.pdf, https://www.cfa.harvard.edu/~srugheimer/Women_in_STEM_Resources.html,
Examples, biases against Black people: https://journals.sagepub.com/doi/abs/10.1111/j.1467-9280.2005.01664.x , https://www.ncbi.nlm.nih.gov/pubmed/17594129
Example, biases in healthcare settings: https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-017-0179-8
Example, biases against arab-muslim men in hiring: https://www.sciencedirect.com/science/article/abs/pii/S0927537109000451
[3] Example: https://www.sciencedaily.com/releases/2018/12/181210165115.htm
[4] Example in healthcare: https://www.independent.co.uk/life-style/health-and-families/health-news/how-sexist-stereotypes-mean-doctors-ignore-womens-pain-a7157931.html study: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5845507/
[5] http://www.spp.uwa.edu.au/__data/assets/pdf_file/0019/2860012/The-Paradoxc-of-Meritocrary.pdf
[6] Here’s an introduction: https://socwomen.org/wp-content/uploads/2018/03/fact_3-2010-oppression.pdf
[8] https://territory-mag.com/articles/reed-v-reed-fight-gender-equality/
[9] Example: https://cogsci.yale.edu/sites/default/files/files/Thesis2016PavcoGiaccia.pdf
[11] Since it keeps coming up, here are a few resources about ethics and evolutionary psychology: http://bernard.pitzer.edu/~hfairchi/pdf/ScientificRacism.pdf https://link.springer.com/article/10.1023/A:1026380825208 https://www.goodreads.com/book/show/1137507.Evolution
While I appreciate your saying you don't intend to ban topics, I think there is considerable risk that this sort of policy becomes a form of de facto censorship. In the same way that we should be wary of Isolated Demands for Rigour, so too should we be wary of Isolated Demands for Sensitivity.
Take, for example, the first item on your list - let's call it A).
I agree that this is not a great topic for an EA discussion. I haven't seen any arguments about the cost-effectiveness of a cause area that rely on whether A) is true or false. It seems unlikely that specifically feminist or anti-feminist causes would be the best things to work on, even if you thought A) was very true or very false. If such a topic were very distracting, I can even see it making sense to essentially ban discussion of it, as LessWrong used to do in practice with regard to politics.
My concern is that a rule/recommendation against discussing such a topic might in practice be applied very unequally. For example, I think that someone who says
would not be chastised for doing so, or feel that they had violated the rule/suggestion.
However, my guess is that someone who said
might be criticized for doing so, and might even agree (if only privately) that they had in some sense violated this rule/guideline with regard to topic A).
If this is the case, then this policy is de facto a silencing not of topics, but of opinions, which I think is much harder to justify.
As a list of verboten opinions, this list also has the undesirable attribute of being very partisan. Looking down the list, it seems that in almost every case the discouraged/forbidden opinion is, in contemporary US political parlance, the (more) Right Wing opinion, and the assumed 'default' 'acceptable' one is the (more) Left Wing opinion. In addition, my impression (though I am less sure here) is that it is also biased against opinions disproportionately held by older people.
And yet these are two groups that are dramatically under-represented in the EA movement! (source) Certainly it seems that, on a numerical basis, conservatives are more under-represented than some of the protected groups mentioned in this article. This sort of list seems likely to make older and more conservative people feel less welcome, not more. Various viewpoints they might object to have been enshrined, while other topics, whose discussion conservatives find distasteful but which is nonetheless not uncommon in the EA community, are not contraindicated.
For a generally well-received article on how to partially address this, you might enjoy Ozy's piece here.
Ozy also wrote a response to this article which agrees with some of your points:
https://thingofthings.wordpress.com/2019/05/15/a-response-to-making-discussions-in-ea-groups-inclusive/
I am not yet convinced that changing the content of conversation in the ways mentioned above is either necessary or sufficient to bring more diverse voices into the community. I offer counterexamples below.
Changing content is not necessary to increase diversity – law schools have become more diverse without changing the content of their discussions. Most US law schools now have roughly 50-50 male-female gender ratios (https://www.enjuris.com/students/law-school-gender-ratio-2017.html) even though the first-year mandatory curriculum often asks students to consider arguments women might find uncomfortable (e.g., what proof should we require raped women to show before we convict the alleged rapist, should battered women acting in self-defense be treated the same as other people acting in self-defense).
Changing content is not sufficient to increase diversity – the Young Adult Fiction Twitter community is intolerant and toxic (often towards members of underrepresented groups) even though it cares deeply about principles similar to those mentioned above (https://www.vulture.com/2017/08/the-toxic-drama-of-ya-twitter.html).
My priors: I am a woman, a POC, and from a developing country, and I suspect I generally have similar opinions as the writers of this post about the topics mentioned under “Example topics.” I also believe that (usually) the best way to fight bad free speech is with good free speech.
Necessity/sufficiency tests are too narrow. Aid is neither necessary nor sufficient to end poverty, but we do it anyway.
I think your examples highlight a difference in barriers to entry rather than in the amount of alienating conversation.
With Twitter, almost anyone can take part anonymously, whereas to get into law school you probably have to take part in interviews/get references where you don't alienate people.
The law school example seems like weak evidence to me, since the topics mentioned are essential to practicing law, whereas most of the suggested "topics to avoid" are absolutely irrelevant to EA. Women who want to practice law are presumably willing to engage these topics as a necessary step towards achieving their goal. However, I don't see why women who want to effectively do good would be willing to (or expected to) engage with irrelevant arguments they find uncomfortable or toxic.
If the topics to avoid are irrelevant to EA, it seems preferable to argue that these topics shouldn't be discussed because they are irrelevant than to argue that they shouldn't be discussed because they are offensive. In general, justifications for limiting discourse that appeal to epistemic considerations (such as bans on off-topic discussions) appear to generate less division and polarization than justifications that appeal to moral considerations.
I don't mind when people talk about off-topic subjects in casual EA spaces (machine doggo) as long as it's not harmful. I think it's a good thing for people to have social spaces and to bond over things that aren't intense philosophical discussions!
I also don't mind if people talk about painful and difficult subjects as long as they've tried to minimize the harm and the conversation is genuinely useful.
I do mind if people casually bring up hurtful topics which provide no discernible gain to the community.
Not a downvoter - I think this discussion is inevitable and may as well happen here and now - but it seems obvious why the downvotes were silent; the above is the dominant view amongst employers (most media, all corporations, nonprofits, civil services); so dissenting from it under your own name is a liability.
I found this piece thoughtful and full of sensible caveats, and I liked the survey, even if it was highly nonrandom. It sets a worrying lower bound on the size of the problem and makes me update a bit toward taking alienation seriously. But it does little else - where is the cost-benefit analysis? You mention that the proposal removes a safe space for people without soft skills who just want to know the truth. Why is this cost necessarily less than the assumed and unestimated cost of alienating others? Why is this the place to give unempirical deontology a pass?
Consensus in these two fields seems to be being used as a proxy for truth. However, what social psychologists say is known to be insufficient for this purpose, because the methods generally used there systematically produce false significance. This is relevant here because several irreplicable soc. psych. effects are load-bearing pillars of the above kind of proposal (for instance stereotype threat and implicit association testing). This unreflective endorsement of a field with such bad epistemics doesn't inspire much confidence, since what is being asked is that the above fields get a free pass - that everyone act as if their findings were true.
It is easily possible that the proposal is the right choice for a movement which wants to grow and diversify without bringing shaming and smearing upon itself. The above does almost nothing to show that this is the case.
The proposal - for us to self-censor - also comes a week after SSC was forced into censorship by extreme proponents of the above view, and so might be suffering from association with that. I'm writing this as a throwaway, not because I think the OP would retaliate, but because there are demonstrably others who would.
I wish the following didn't matter but I fear I have to put it, to be taken seriously: I'm working-class and disabled.
When I first came across this post, it had 10 down-votes. People who are down-voting, can you please explain why? To just down-vote seems unproductive.
One reason I can think this may be happening is that people who are down-voting are reading OP to be saying "I am not welcome to open discussion and new ideas" so they see no point in attempting to discuss. I don't think that is an accurate view of the post however. And even if it were, it doesn't help the other forum-goers who would like to see the opposing points made which are supposedly sound enough that some portion of EAs had no problem putting the post pretty deep in the negative.
[I didn't downvote.] I fear the story is that this is something of a 'hot button' issue, and people in either 'camp' have sensitivities about publicly speaking out on one side or the other for fear of how others in the opposing 'camp' may react towards them. (The authors of this document are anonymous; previous conversations in this area on this forum have had detractors also use anon accounts or make remarks along the lines of, 'I strongly disagree with this, but I don't want to elaborate further'.) Hence people who might be opposed to this (for whatever reason) may prefer anonymous (albeit less informative) feedback via downvoting.
There are naturally less charitable explanations along the lines of tribalism, brigading, etc.
Are you implying that every time someone downvotes a post they should provide an accompanying explanation of their decision? If not, what makes this post different from others?
I'm not OP but my thoughts -- I agree that when I see _a lot_ of downvotes on a seemingly reasonable post that had a decent amount of work and thought put into it and _no one_ explains why, I think there is a collective action problem that could discourage future contributions and weaken discourse. So while I wouldn't think any one individual should be obligated to explain their downvotes, I think the community in aggregate does have such an obligation in these cases where there are a lot of downvotes and there is no clearly obvious reason why (e.g., obvious spam).
That makes sense.
I strongly agree with Peter's thoughts on this, and wrote up my own reasoning here. Basically, I read a downvote to mean either "I think you're wrong" or "there's a definite factual error here" and it's frustrating to see lots of downvotes with no indication of what might be wrong.
Sometimes, I can guess pretty easily at why I think someone might have downvoted, but a post's author might not be in the same position (very frustrating for them), and I don't want to speak on a downvoter's behalf by speculating about their hypothetical beliefs (which I may well get wrong).
It takes a long time to craft a response to posts like these. Even if there are clear problems with the post, given the sensitive topic you have to spend a lot of time on nuance, checking citations, and getting the tone right. That is a very high bar, one that I don't think is reasonable to expect everyone to pass. In contrast, people who agree seem to get a pass for silently upvoting.
That's a reasonable objection. I wouldn't mind seeing even a non-nuanced response (e.g. "I think this post undervalues the utility of X compared to Y") rather than no response, but many other readers don't share my preferences and might take issue with that kind of comment (especially for this topic). And of course, mobile users are especially disadvantaged when it comes to comment-writing.
Still, if someone is a downvoter and wants to do something helpful for others in the same situation, creating one critical response that can then be upvoted (showing the relative popularity of objection X vs. objections Y, Z, etc.) seems unusually valuable.
The document above argues for a certain level of self-censorship in EA spaces. Some comments have made the leap to the connected issue of group censorship by a moderator. I've written up an example of when someone did not self-censor and listed a couple of questions.
Concrete example: In the Christians in EA group, someone (who had AFAIK never posted before) posted a 60-page document. This document outlined his theory that the best EA cause was to support the propagation of Mormonism, because any civilization based on the equality of men and women was doomed to fail (he saw Islam as another viable civilization, but inferior to Mormonism).
He wanted me to debate him point by point in his argument. I was not willing to argue with him, because it was a waste of my time. At some point, the moderators of the group took some action (I can't recall if they commented to say they didn't support the post, or if they deleted it).
Questions:
- If someone posts with very significant errors, should the community spend time correcting those errors? Does it matter how much the person has contributed to the community so far?
- If the opportunity cost of correcting the errors is too high, what should we do instead?
- Would it have been more altruistic for the man in question to self-censor?
- What's the cost of doing nothing when 'politically incorrect' posts come up?
Like many other problems that EAs are aware of, the particular incident you described comes from an outlier that drives the mean significantly forward (I of course know who you are talking about, and the fact that many who've been in EA for a long time know as well should indicate that this is both rare in terms of % of people yet perhaps not that rare in terms of % of drama it accounts for).
The key insight here is that the long-tail matters. As a rough prior we could anticipate that 80% of the drama will come from 20% of people (in my experience this is even more skewed, to perhaps 98% of drama coming from 2% of people). In which case, advocating for self-censorship in general in the community is stifling and unnecessary for the bulk of people (who already doubt themselves), and desperately necessary for the outliers who just march forward without much self-awareness in some or other controversial direction, as if mandated by a higher power to cause as much drama as possible.
If we recognize that the problem per person follows a long-tail distribution, our strategies should look very different than if it was a kind of normal/Gaussian distribution.
That's a good point. What kind of strategies do you think we should adopt?
I think just not answering or downvoting is enough. You can point out that there is not sufficient evidence for you to take the time to study this long document which also seemed crackpotty to your quick skim. That is the truth, after all. If the crackpot repeatedly spams, well spamming warrants a ban.
Thanks for your thoughts.
I'm a white male, and I view my own comfort in debate spaces merely as a means to reach truth, and welcome attempts to trade the former for the latter. Of course, you may be thinking "that's easy for you to say, cause you're a white male!" And there's no point arguing with that because no one will be convinced of anything. But I'm at least going to say that it is the right philosophy to follow.
Note that even if we ignore all these considerations, there are still educated people and uneducated people, kind people and unkind people, persuasive people and impertinent people, and all kinds of other differences that will lead to irrational collective epistemology. So I don't see how this calls for any meta-level shift in discussion rules and norms. In the same way that we informally figure out that, say, people with biology PhDs know more about biology, we can informally figure out that women know more about what it's like to be a woman, and so on.
The reverse of these things can happen when oppressive/offensive speech is stifled. But of course, there is a balance to be had if you want to appeal to as many people as possible. I'm not going to be a free speech absolutist, of course.
Yes, but this goes both ways. There are political and cultural groups where the same ideas are not viewed as so offensive, and the way that we treat these ideas can change our perceptions of how offensive they are. The best example for this is trigger warnings, which research has shown to increase the level of triggering that people get from the unwanted content. You can also look at the differences in how the same nominally offensive things are perceived by people of the same demographic in different cultures - such as conservative and religious culture in the United States, or people outside the West, basically anywhere besides contemporary liberal and leftist circles. So, while we obviously can't ignore facts about people being offended, it's worth taking some time to think about what might be done to mutually build resilience and trust in EA, rather than just taking everything one way into political correctness treadmills or internecine identity politics.
And we see the key issue here:
One of the major problems driving social justice fear and offense in the US right now is the failure of right-wing and centrist actors to credibly demonstrate that they're not secretly harboring bias and hate. If I were going to pick something that activists for underrepresented demographics need to revise when they look at EA, it's that they should stop applying their default suspicions to the people in EA. If you think that Charles Murray or whoever has got to be shut up because the angry masses are looking for a demagogue to give them an excuse to oppress people, fine - that's a judgment about society writ large; it's on you and I don't know whether you're wrong or right. But when people are already committing to improve the well-being of the world as much as possible, and when they are making personal sacrifices to be involved in this effort, and when they are accepting our paradigm of philosophical and scientific rigor, the least you can give them is a basic element of trust regarding their beliefs and motivations. They aren't bad people. It's OK if you don't fix their beliefs. They're not going to set fire to Western society. There are way too many smart, diverse and educated people in EA for that sort of thing to happen.
And then you'll feel better about them! And they won't assume that you're just here to shut out wrongthink, and so on for every dispute that people can have. It's an important part of building solidarity in a movement.
Of course this is a two-way street: on the flip side, people need to tackle politically incorrect topics in ways that make it easy for other people to extend the benefit of the doubt. If people are idly speculating (or worse, ranting) about very controversial issues, that's pretty tough. What we want is for people to start with serious important questions about the world, and then use arguments to answer the question if-and-only-if they help, because (a) it is way more productive, and (b) it credibly demonstrates that people's motivations are coming from a good place and their takeaways will be good.
I moderate an EA social media group. In the (probably) thousands of posts that I've gotten on it, not one person has ever done the former - "hey guys, what do you think of the IQ gap between races?" and so on. If they did, I'd delete it, because it causes all sorts of problems (as you helpfully point out) while lacking a clear relation to EA issues. But there was exactly one case where someone asked about a controversial side of a specific EA issue, whether lifesaving in Africa is bad because it would purportedly lower average global intelligence. I didn't delete it, because that person's motivations and takeaways related to the very important issue of cause prioritization. Instead, I wrote one comment that persuaded them to rethink the issue, and that was the end of it. For a variety of reasons, I am skeptical of the idea that it's unduly hard to "educate" (interesting word choice) people about these race and gender issues, compared to convincing them on any other topic in EA.
As you might imagine, I don't personally worry much about this since it comes up so rarely in my own lane (it sounds like things are different in some other places). But I will try to remember the things you've written in the future!
You identify the number one issue you have with activists from demographic groups as being their suspicion of EA motivations.
And you claim that they shouldn't be, because 1) EAs are trying to improve the world as much as possible, 2) EAs make personal sacrifices to do so, and 3) they accept the paradigm of philosophical and scientific rigor.
My question: is this belief because you are an effective altruist, or because these criteria are sufficient to indicate positive intentions towards minorities 100% of the time?
An example: a devout traditionalist Buddhist male might also believe that they are trying to improve the world as much as possible, given the guidelines of their religious/spiritual tradition. They might very well also make personal sacrifices for such, and they may well be doing so in a paradigm of philosophical rigor. Buddhists might also claim that the way they achieve these things is scientifically rigorous (there's a famous quote by the Dalai Lama where he says that if science refutes part of his teachings, the teachings must be rejected). But if said (male) Buddhist was raising questions about whether women deserve equal rights to men, does the fact that he satisfies your three criteria mean we should assume he has positive and respectful intentions towards women?
We could replace Buddhism with any religious or spiritual worldview that is compatible with western science. I'm more familiar with Buddhism, which is why I make that my example.
My guess is that for most people who have read/upvoted your comment, the answer would be no. The reason is that none of these criteria reveal whether someone harbors bias or is malevolent about certain things. If you harbor an implicit belief that other genders are inferior to men, for example, then no matter how much you care about bettering the world, and how many sacrifices you make to better the world, your intentions would still be about bettering the world for {approved of population/men}. Philosophical and scientific rigor don't help either; although I'm not well versed in the history of racism, I do know that science and philosophy have been used to espouse discriminatory views plenty of times in the past. See craniometry and Immanuel Kant (whose original work discriminates against black people, iirc).
The criteria by themselves are sufficient to indicate that benign intentions are 90% likely. The remaining 10% chance is covered by the fact that we are Effective Altruists, so we extend the benefit of the doubt for a greater purpose.
If we were Buddhists, then yes except for the fact that I am mainly talking about the offensive things that people really say, like "the variability hypothesis explains the gender disparity in academia" or "women tend to have worse policy preferences than men" and so on, which are not simple cases of rights and values.
For the most incendiary issues, there is a point where you would expect any EA to know that the PR and community costs exceed the benefits, and therefore you should no longer give them the benefit of the doubt. And I would expect such a person, if they support EA, to say "ah I understand why I was censored about that, it is too much of a hot potato, very understandable thing for a moderator to do." Again, just the most incendiary things, that people across the political spectrum view as offensive. Some kinds of unequal rights would be like that. But there are some issues, like maternity leave or child custody or access to combat roles in the military, where people commonly support unequal rights.
Even if you held such a belief, it does not follow that you would disregard the rights and well-being of women. You might give them less weight, but it would not matter for most purposes; the same charities and jobs would generally still be effective.
Science has improved; we know way more about people than we used to. I presume you would agree that the best science doesn't give people reasons to give unequal rights to people. Every wrong view in the history of science has been justified with science. So what do we do about that? Well, we have to do science as well as we can. There are no shortcuts to wisdom. In hindsight, it's easy to point at ways that science went wrong in the past, but that's no good for telling us about current science.
If someone has the right philosophy, then sharing better information with them will generally just help them achieve the same values that you want. If they don't have the right philosophy then all bets are off, you might be able to manipulate them to act rightly by feeding them an incomplete picture of the science. But that's why the EA/not-EA distinction clears things up here.
Can you clarify why you think your three criteria are enough to ascribe benign intentions the majority of the time? The point I was trying to get at was that there's no necessary relation between thinking a lot about how to make the world a better place and making sacrifices to achieve that, on the one hand, and having benign intentions towards other groups on the other. People can simply define the world that they are serving more narrowly.
A concrete example of how believing women have less worth than men could be harmful in evaluating charities: one charity helps women by X utils, one charity helps men by X utils (perhaps charity #1 decreases the amount of work women need to do by providing a well for water, etc.). Believing women have less worth than men would lead to charity #2 strictly dominating charity #1 when they should actually be equally recommended.
In terms of people having the 'right' philosophy: what I'm saying is that there's nothing inherent to EA that prevents it from coexisting with misogyny. It's not a core EA belief that women are equal to men. So we shouldn't be surprised that EAs may act as misogynists.
In any case, you admit that your criteria aren’t sufficient to screen out all negative intentions. When you say we give the benefit of the doubt for the sake of the EA project, what you’re saying is that demographic minorities need to accept some level of malevolence in their communities in exchange for the privilege of contributing to the EA cause. Why should the burden be on them? Why not place the burden (if you can even call it that) on individuals who don’t have to worry about this systematic malevolence — which is what this document suggests we do — to think about what they say before they say it.
(I’m not going to address each of your rebuttals individually because the main points I want to defend are the two I’ve tried to clarify above.)
It seems right based on all my experience talking to people, seeing what they say, considering their beliefs, and observing their behavior.
Well in EA we don't "just more narrowly define the world that we are serving". We have philosophical rigor.
There are people in EA who believe that animals have a lot of value, so they don't give money to charities that help women (or men). Are they harming women? What should we do about them?
What do you mean by equal? It's a core EA belief that the interests of women are equally valuable to the interests of men.
Also, your claim is that we should hide facts from people in order to prevent them from achieving their goals. This is only truly justified if people are actively trying to make life worse for women, which is obviously antithetical to EA. The mere fact that someone thinks women should be treated or prioritized a little differently doesn't necessarily mean that giving them facts will make their behavior worse under your view.
EA is not for malevolent people, EA for people who are trying to make the world better.
If you are worried about people lying to infiltrate EA, that's not going to change no matter what we do - people could lie to infiltrate any group with any rules.
The EA cause is not a privilege. It's a duty.
In my original comment, I explicitly said that it's a two-way street.
The reason that it's a two-way street is that when these kinds of issues are only resolved by making demands on the offending party, the conflict never ends - there is a continued spiral of new issues and infighting.
Consider the possibility that the philosophy you mention is not as easy for everyone to follow as it is for you. When the entirety of society is built with your comfort in mind, it's very easy to sacrifice some of it as a means to reach truth, especially since, as soon as you leave the debate space, you can go back to not thinking about any of the discomfort you experienced during the discussion. You are safe putting all of your emotional and intellectual energy into the debate space, knowing that if the conversation gets too much for you, you can opt to leave at any time and go on with the rest of your day.
If, however, someone lives in a world where every day includes many instances of them being made uncomfortable (even if each instance might seem trivially small to someone who only sees one), which they have no option to switch off, that person cannot safely put all of their emotional and intellectual energy into a debate space which asks them to sacrifice their level of comfort. They don't have the option of participating in a discussion until it becomes too much, because they have to save energy with which to get through the rest of the day. As a result, they are not able to participate in the debate on an equal footing (if they even choose to at all, given the emotional effort involved).
Well that is what I said someone would say, and what I said I wasn't going to argue about. I am aware of the kinds of arguments you're making. And I have lived in places where everything around me was very much not built around my comfort, I was in the military where I could not switch off in the slightest.
To be clear, I didn't make the above point in order to say "you should feel bad because you're white and male". I also didn't make it to say "you should just shut up and defer to the opinions of others here because you're white and male". I made it to try to explain why the choice to say "everyone just needs to suck it up and deal with their own discomfort" is not a choice with no downside; it puts the debate on an uneven footing, where people are not able to participate equally. It doesn't seem a stretch to then say that debates where some people are at an inherent disadvantage from the start are not as self-evidently optimal a truth-seeking exercise as they may first seem.
I did not think you were trying to say either of those things. Again, I'm aware of these things that people say. I was just talking about the philosophical basis for judging strategies here.
A few EAs decided to work together to write an in-depth, section-by-section response to the above post. Please note that it discusses issues which were listed in Making Discussion Inclusive as potentially alienating.
Identifying this is a start, but it remains unclear to me that this post is likely to result in any action that will change anything (realizing some people may disagree that this is the experience of some people in the community or that their experience of alienation matters). But supposing you agree that this post describes a real problem and the problem deserves solving, what are things we might do as a community to be more inclusive?
I'm thinking here of asking for specific, actionable ideas and not just generic stuff like "spread awareness". Additionally, these need to be actions that will be carried out by the people who care about this to make the community different than it is today, and not demands for people who are in the alienator group to change, because that's also unlikely to be an effective strategy. I imagine most actions that would work well would be of the form "I want EA to be more inclusive, and to make it that way I'm going to do X". What is X?
The post answers that question - don't say the inflammatory things listed, or if you must, follow the steps listed first. (Note: Explanation, not necessarily endorsement.)
This seems to miss the point of my question, because it already seems to be the case that the people who could do something don't much engage in these discussions. Rather, it's primarily the folks who are causing the feelings of alienation, and do not themselves feel alienated, who are starting and engaging in discussions that cause feelings of alienation for others. Presuming they do so either because they do not consider their actions to be contrary to the purpose of inclusiveness or because they do not value inclusiveness, what actions can those who are alienated or who value inclusiveness take to address this issue? That is, if you feel there are things being done and said that cause alienation, how do you get that to stop other than just hoping that other people decide on their own not to do it anymore?
This is great. Wish it had been taken seriously in 2019. Deserves to be back on the front page!
Thanks for the post, I found this post and the following post Why & How to Make Progress on Diversity & Inclusion in EA to be very useful.